How I got Roon working over OpenVPN (hard for me, easy for you)

Hi-

So, I’m not really a networking guy. Nor an engineer. I’m stubborn, however, and often manage to make things work after breaking them. But I’ve never set up a site-to-site VPN. I need a bit of advice / navigation to kick off in the right direction.

If I have two homes with full UniFi networks (one a UDM Pro, the other a USG3), both on cable modems (one Cox, the other Xfinity), but nothing special - no VLANs, no VPNs, no DDNS, just a couple of port forwards, threat detection, etc. - I need a basic “these are the elements you need to work on” list so I can go beat my head against them until they work, keep just my one home Core, and play to endpoints at the other home.

I have lots of questions:

  • Do I need to set up DDNS? On both sites? Is there a preferred way to do this?
  • Which flavor of site-to-site VPN should I try to set up? OpenVPN or IPSec (which are natively supported), or do I need to try to set up Tailscale or WireGuard or something else? If all I’m trying to do is access my Core from remotes at my second home, and play to endpoints at my second home, is there a reason to pick one over another?
  • Do I need to set up pfSense? Not sure I understand why I’d need another router given I have UniFi playing that role, nor where I’d set it up.
  • Do I need to set up @Aaron_Turner’s udp-proxy-2020 if I’m going UniFi site-to-site? If so, is there a set of instructions somewhere on how to do that on UniFi USGs for near-idiots like me? I’ve tried to read the thread a few times.
  • What security aspects do I need to worry about more once I have the two networks connected (I’m not particularly worried that if someone gets into one home they’re in the other one - if malware “gets” me, I’m not more vulnerable if they “get me in both places”, and my second home doesn’t have a lot of infrastructure)?

Honestly, I was trying to achieve this with a 2-core solution for a while, but no luck - and just got this note from Brian, so I’m getting down to the real work, and realizing that I may or may not be up to it without some real help. So thanks to anyone and everyone in advance.

Also, I may totally have missed an exact “dummies guide” for this situation being posted somewhere. I tried, I honestly tried, to read this thread and @Aaron_Turner’s other thread. And yes, I’ll admit it, I got lost. I often “just try stuff” from Stack Overflow. But I just didn’t know where to start on this one. So thanks.

FWIW, I will say site-to-site should totally work. Also, I’ve never gotten it to work (but I’ve also never tried!). So yeah, take this with the appropriate amount of salt…

  1. No you don’t need DDNS. Roon doesn’t use DNS at all.
  2. OpenVPN. If you read the docs and this thread, you’ll see that site-to-site IPSec VPNs are not compatible with udp-proxy-2020. How to get OpenVPN running on your two UniFi devices is not really something I can help with.
  3. You shouldn’t need pfSense since you already have a USG3 and UDM Pro.
  4. Yes, you’ll need to install udp-proxy-2020 on BOTH devices for a site-to-site VPN.
  5. Security is a complicated question. But basically you control what traffic is passed between the two sites over the VPN via the firewall policies on either end. What you allow/deny determines that. How you think of the two sites (are they equally trusted?) also matters.

Anyways, for more info you should read the docs & startup scripts on the GitHub site which hosts udp-proxy-2020.


Thank you! Ok, well my first step is to get really stubborn and figure out how to get OpenVPN running on both machines. What’s super-annoying is that UniFi doesn’t even make it easy to SSH into your devices - they’re using an out-of-date form of fingerprint sharing, and I had to figure out how to edit ~/.ssh/config and manually add an entry. There will be layers within layers for this one. Onward! And many thanks for helping me get started. I may need to set up DDNS and RADIUS in order to get OpenVPN working. It’ll be a fun day 🙂
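For anyone else who hits the SSH problem, the entry I added was along these lines (the address is an example, and the exact option names may vary with your OpenSSH version; on older clients PubkeyAcceptedAlgorithms is spelled PubkeyAcceptedKeyTypes):

```
# ~/.ssh/config - re-enable the legacy ssh-rsa algorithms that newer
# OpenSSH clients disable by default (host/address are examples)
Host udm
    HostName 192.168.1.1
    User root
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```

With that in place, `ssh udm` stopped complaining about the host key negotiation.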

Ok, trying to get this running site-to-site over OpenVPN between a UniFi UDM Pro and a UniFi USG. Managed to get OpenVPN running after what seemed to me like a lot of work, and I think I figured out how to get it running on the USG and which interfaces to use. The UDM is different… I think I managed to get things working, but I’ve never used Docker before, so I can’t tell.

  1. I ran this on the UDM:
    curl -fsL "https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/HEAD/on-boot-script/remote_install.sh" | /bin/sh
    and got this:
On boot script installation finished

You can now place your scripts in /mnt/data/on_boot.d
  2. I made udp-proxy-2020.conf and changed the interfaces to br0 and tun1 (which I think is right after a bunch of poking around with ifconfig and watching port 9003) and moved it to /mnt/data/udp-proxy-2020.
  3. Then I moved 40-udp-proxy-2020.sh to /mnt/data/on_boot.d/40-udp-proxy-2020.sh and ran it.

The results were pretty quiet…

# /mnt/data/on_boot.d/40-udp-proxy-2020.sh
/mnt/data/on_boot.d/40-udp-proxy-2020.sh: line 1: s: not found
/mnt/data/on_boot.d/40-udp-proxy-2020.sh: /mnt/data/udp-proxy-2020/udp-proxy-2020.conf: line 2: te:: not found
Trying to pull docker.io/synfinatic/udp-proxy-2020:latest...
Getting image source signatures
Copying blob 9981e73032c8 done
Copying blob 442bc1c28dae done
Copying blob b97ce7cf2c96 done
Copying config 914d21726a done
Writing manifest to image destination
Storing signatures
a53721484c521b60eddacd0b1c2c67e0e4a7c3faba035f317184b04799278b67

And then back to the command prompt on the UDM.

Was this successful? How can I tell if it’s running and/or doing what it’s supposed to do? And how can I tell if it’s resilient to restarts?

Thanks!

The errors reported on lines 1 & 2:

/mnt/data/on_boot.d/40-udp-proxy-2020.sh: line 1: s: not found
/mnt/data/on_boot.d/40-udp-proxy-2020.sh: /mnt/data/udp-proxy-2020/udp-proxy-2020.conf: line 2: te:: not found

are bad. Not sure why you’re missing /bin/sh on your UDM, but you should change line 1 to match the location. Sorry, but I’ve never owned a UDM, and this is the first time I’ve seen that error.

You should see the Docker container running via podman when it works correctly.

Ok, I had made some stupid vi errors and had a couple of missing characters at the beginning of 40-udp-proxy-2020.sh and udp-proxy-2020.conf. I think I fixed those. Now when I run /mnt/data/on_boot.d/40-udp-proxy-2020.sh I simply get the response d5d30d3462df03298486167bc5220e78ca0f75ecde5e7471fceea866d693ace7

However, when I run podman ps all I get is:

podman ps
CONTAINER ID  IMAGE                      COMMAND     CREATED      STATUS          PORTS  NAMES
f3f508478663  localhost/unifi-os:latest  /sbin/init  2 weeks ago  Up 2 weeks ago         unifi-os

Sorry for the n00b questions; I’m using a lot of tools I either have never used or haven’t used in more than a decade. (Using vi on my UDM was hilarious. My son thought I was hurt, I was yelling so much.)

EDIT: I did manage to figure out a bit more about podman, and when I tried to get the container’s history I could see that (a) the container name does correspond to an image, and (b) there’s an issue getting any history for it because the “layer is not known”… I realize this is basically docker/podman debugging, but I just don’t know these tools. Thanks.

# podman history udp-proxy-2020
Error: error getting history of image "udp-proxy-2020": error getting image configuration for image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known

And in case this is useful, podman does believe that there is an image for udp-proxy-2020, but there is clearly another error with the images:

# podman images
ERRO[0000] error checking if image is a parent "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known 
REPOSITORY                            TAG       IMAGE ID       CREATED         SIZE                       R/O
localhost/unifi-os                    latest    be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os                    default   be4d69174995   7 weeks ago     1.82 GB                    true
docker.io/synfinatic/udp-proxy-2020   latest    914d21726ae4   7 months ago    12.9 MB                    false
localhost/unifi-os                    current   4a642e78ff56   10 months ago   unable to determine size   false

And, since I’m now in the business of trying to figure out tools I don’t understand, here’s one more thing to diagnose what’s going on:

# podman image tree docker.io/synfinatic/udp-proxy-2020
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1032f80]

goroutine 1 [running]:
github.com/containers/libpod/libpod/image.GetLayersMapWithImageInfo(0x40000c3310, 0x7fd14cde77, 0x23, 0x400034a430)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/libpod/image/image.go:1468 +0x4f0
github.com/containers/libpod/pkg/adapter.(*LocalRuntime).Tree(0x400006fc20, 0x2b1f860, 0x40002f4c00, 0x400006fc00, 0x0, 0x0, 0x4000375700)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/pkg/adapter/images.go:19 +0x78
main.treeCmd(0x2b1f860, 0x0, 0x0)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/cmd/podman/tree.go:59 +0xb8
main.glob..func63(0x2abab20, 0x40003f4da0, 0x1, 0x1, 0x0, 0x0)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/cmd/podman/tree.go:33 +0xb4
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).execute(0x2abab20, 0x40000be070, 0x1, 0x1, 0x2abab20, 0x40000be070)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:826 +0x328
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x2abc420, 0x6, 0x16fa37d, 0x8)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:914 +0x230
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:864
main.main()
	libpod-v1.6.1/vendor/src/github.com/containers/libpod/cmd/podman/main.go:160 +0xbc

Not sure what is going on…

I don’t own a UDM, but that last error is a crash in podman itself, not my Docker image/program. I don’t have an ARM64 system handy with Docker on it to pull and test the image myself, but I know other people have it working.

I wonder if you have an incomplete/corrupted image cached locally? One thing I notice is that the imageID of 914d21726ae4 doesn’t match any of the images I have posted: Docker

So I would delete your local image and try pulling it again.

So this is surely the issue. Here’s what I did and what happened… TL;DR: I got the same imageID:

# docker images -a
ERRO[0000] error checking if image is a parent "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known 
REPOSITORY                            TAG       IMAGE ID       CREATED         SIZE                       R/O
localhost/unifi-os                    latest    be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os                    default   be4d69174995   7 weeks ago     1.82 GB                    true
docker.io/synfinatic/udp-proxy-2020   latest    914d21726ae4   7 months ago    12.9 MB                    false
localhost/unifi-os                    current   4a642e78ff56   10 months ago   unable to determine size   false
# docker rmi 914d21726ae4
Untagged: docker.io/synfinatic/udp-proxy-2020:latest
Deleted: 914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b
# docker images -a
ERRO[0000] error checking if image is a parent "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known 
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE                       R/O
localhost/unifi-os   latest    be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os   default   be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os   current   4a642e78ff56   10 months ago   unable to determine size   false 
# 
# docker pull synfinatic/udp-proxy-2020
Trying to pull docker.io/synfinatic/udp-proxy-2020...
Getting image source signatures
Copying blob 442bc1c28dae done
Copying blob b97ce7cf2c96 done
Copying blob 9981e73032c8 done
Copying config 914d21726a done
Writing manifest to image destination
Storing signatures
914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b
# 

So I’m truly sorry that I don’t understand docker better - all these tools theoretically make things easier to do, but I’m hopelessly out of date.

Trying to be helpful, I searched Docker Hub to see if there were any images with this ID, but it doesn’t seem like there are. See link to Docker Hub search for my imageID.

For debugging purposes, I’m running UniFi OS 1.12.33, and it is definitely a UDM Pro with a 4-core ARM 64-bit CPU.

Ack. Now, just for giggles I tried running what I think the shell script would produce, and I got this:

# podman run -i -d --rm --net=host --name udp-proxy-2020 -e PORTS=9003 -e INTERFACES=br0,tun1 -e TIMEOUT=250 -e CACHETTL=90 synfinatic/udp-proxy-2020:latest
6ad4dc9acd2aaba1de1e476a313c3369b43a5c1f1e1663470e549d5b84cdb3c

And then I checked, and lo and behold:

# podman ps
CONTAINER ID  IMAGE                                       COMMAND               CREATED        STATUS            PORTS  NAMES
6ad4dc9acd2a  docker.io/synfinatic/udp-proxy-2020:latest  /bin/sh -c /usr/l...  5 minutes ago  Up 5 minutes ago         udp-proxy-2020
f3f508478663  localhost/unifi-os:latest                   /sbin/init            2 weeks ago    Up 2 weeks ago           unifi-os

But if I kill, remove, and then try to run the script again, I get no containers running.

Remote debugging is always hard, but if you look at the script, it does:

if podman container exists ${CONTAINER} ; then
    podman start ${CONTAINER}
else
    podman run -i -d --rm --net=host --name ${CONTAINER} \
        -e PORTS=${PORTS} -e INTERFACES=${INTERFACES} \
        -e TIMEOUT=${TIMEOUT} -e CACHETTL=${CACHETTL} \
        -e EXTRA_ARGS="${EXTRA_ARGS}" \
        synfinatic/udp-proxy-2020:${TAG}
fi

So one of those podman commands is running and failing. If you want, just prefix both commands with echo: the script will then print the command instead of running it, which will tell you which branch is taken. Then you can debug.
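If it helps, here’s the trick in isolation; this sketch only prints the command, so it’s safe to run anywhere (the variable values are the ones from your conf, so substitute your own):

```shell
#!/bin/sh
# Build the same `podman run` invocation the boot script would execute,
# but print it instead of running it (the echo-prefix trick).
CONTAINER=udp-proxy-2020
PORTS=9003
INTERFACES=br0,tun1
TIMEOUT=250
CACHETTL=90
TAG=latest
CMD="podman run -i -d --rm --net=host --name ${CONTAINER} -e PORTS=${PORTS} -e INTERFACES=${INTERFACES} -e TIMEOUT=${TIMEOUT} -e CACHETTL=${CACHETTL} synfinatic/udp-proxy-2020:${TAG}"
echo "$CMD"
```

Once the printed command looks right, paste it back in without the echo wrapper and see what podman complains about.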

Also, what are the contents of /mnt/data/udp-proxy-2020/udp-proxy-2020.conf ?

Ok.

Sorry, there was a lot of information in my conf file, most of it confusing. There were a bunch of conflicting debug options in there, so I had to delete them. Whether we have enough information now, I don’t know. But now the output of the shell script (when I prefix the podman commands with echo) is:

# podman run --log-level=debug -i -d --rm --net=host --name udp-proxy-2020 -e PORTS=9003 -e INTERFACES=br0,tun1 -e TIMEOUT=250 -e CACHETTL=90 -e EXTRA_ARGS= synfinatic/udp-proxy-2020:latest

And when I run the script without the echo prefixes, I get this:

# /mnt/data/on_boot.d/40-udp-proxy-2020.sh
DEBU[0000] using conmon: "/usr/libexec/podman/conmon"   
DEBU[0000] Initializing boltdb state at /mnt/data/podman/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root /mnt/data/podman/storage    
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /mnt/data/podman/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /mnt/data/podman/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] overlay: imagestore=/var/lib/containers/storage 
DEBU[0000] overlay: skip_mount_home=false               
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is not being used 
DEBU[0000] cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
INFO[0000] [graphdriver] using prior storage driver: overlay 
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/10-libpod.conflist 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]docker.io/synfinatic/udp-proxy-2020:latest" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] Using host netmode                           
DEBU[0000] setting container name udp-proxy-2020        
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 1 for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] created container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has work directory "/mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has run directory "/var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata" 
DEBU[0000] New container created "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has CgroupParent "/libpod_parent/libpod-1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] overlay: mount_data=nodev,lowerdir=/mnt/data/podman/storage/overlay/l/N7XV33BVS5VM4IC4KLWJO3BRSF:/mnt/data/podman/storage/overlay/l/RX3ZB32DEEMUSS5AEID7WS23VA:/mnt/data/podman/storage/overlay/l/3FZIP53NLCOPDJ2TG2QBFCN5PN,upperdir=/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/diff,workdir=/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/work 
DEBU[0000] mounted container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" at "/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/merged" 
DEBU[0000] Created root filesystem for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 at /mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroup path for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 to /libpod_parent/libpod-1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 at /mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/config.json 
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/libexec/podman/conmon    args="[--api-version 1 -c 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 -u 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 -r /usr/bin/runc -b /mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata -p /var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/pidfile -l k8s-file:/mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -i --conmon-pidfile /var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /mnt/data/podman/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-opt --exit-command-arg .imagestore=/var/lib/containers/storage --exit-command-arg --storage-opt --exit-command-arg .skip_mount_home=false --exit-command-arg --storage-opt --exit-command-arg .mountopt=nodev --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus.effective: no such file or directory 
DEBU[0000] Received: 14403                              
INFO[0000] Got Conmon PID as 14386                      
DEBU[0000] Created container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 in OCI runtime 
DEBU[0000] Starting container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 with command [/bin/sh -c /usr/local/bin/udp-proxy-2020 --port $PORTS     --interface $INTERFACES --timeout $TIMEOUT     --cache-ttl $CACHETTL $EXTRA_ARGS] 
DEBU[0000] Started container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6

And lo and behold, that results in a running container!

# podman ps
CONTAINER ID  IMAGE                                       COMMAND               CREATED        STATUS            PORTS  NAMES
1e0372f9a44b  docker.io/synfinatic/udp-proxy-2020:latest  /bin/sh -c /usr/l...  4 seconds ago  Up 4 seconds ago         udp-proxy-2020
f3f508478663  localhost/unifi-os:latest                   /sbin/init            3 weeks ago    Up 2 hours ago           unifi-os

Now, I have no idea if it’s actually doing anything - but I’ve gotten comfortable enough with vi to figure out that it was all the conflicting debug options in my conf that were killing me. At this point it’s repeatable: I can kill that container and relaunch it using the script.
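For anyone following along, my conf now looks roughly like the following. The file is sourced by the boot script as shell variable assignments; the EXTRA_ARGS and TAG keys are my reading of the script, so treat them as assumptions rather than gospel:

```shell
# /mnt/data/udp-proxy-2020/udp-proxy-2020.conf - roughly my working version
PORTS=9003            # Roon discovery port
INTERFACES=br0,tun1   # LAN bridge + OpenVPN tunnel interface
TIMEOUT=250
CACHETTL=90
EXTRA_ARGS=""         # assumed key, per my reading of the boot script
TAG=latest            # assumed key, per my reading of the boot script
```

No stray characters at the start of the file this time, which is what was producing those “not found” errors earlier.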

And the image is the correct one, which I confirmed by learning the --digests option (the f35c82df… digest matches the one on the Docker Hub page):

# podman images --digests
ERRO[0000] error checking if image is a parent "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known 
REPOSITORY                            TAG       DIGEST                                                                    IMAGE ID       CREATED         SIZE                       R/O
localhost/unifi-os                    latest    sha256:877043d617062e5e1171151f6675f7a201a055f61d5a10447e6271c8faf2bebe   be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os                    default   sha256:877043d617062e5e1171151f6675f7a201a055f61d5a10447e6271c8faf2bebe   be4d69174995   7 weeks ago     1.82 GB                    true
docker.io/synfinatic/udp-proxy-2020   latest    sha256:f35c82dfbdacfeb62b1e0dea5e8d9bb98014d21b22353150d0790daf699c8eeb   914d21726ae4   7 months ago    12.9 MB                    false
localhost/unifi-os                    current   sha256:4c84adfc305424cfbb0ca4f2cbffdbccb8a9b12152a39e0ee1a6820ac2dfd57f   4a642e78ff56   10 months ago   unable to determine size   false

Now @Aaron_Turner what should I do to see if I can determine if this is actually doing what it’s supposed to?

EDIT: HOLY CRAP! It is doing something right… I got it to work over Teleport (which is UniFi’s white-labeled WireGuard). I added tlprt0 to the interfaces line in the config, killed and relaunched, and then, over Teleport (with Wi-Fi off, on cellular), I was able to (a) see my iPhone in the list of zones in the Roon remote even though it is a private zone, and (b) successfully play to it! So that’s something! It’s not what I came here trying to do, but I feel pretty psyched that I did it as a side effect!


^^ Note there’s no Wi-Fi symbol there - I’m accessing over cellular, and I will note again that I feel like a total badass.

And… in the perhaps-good-news department: once I disconnect my iPhone from Teleport, the interface must disappear, because I get lines like this in podman logs cbd1e4adc7ae. I assume they won’t build up forever?

level=warning msg="Unable to send 132 bytes from br0 out tlprt0: send: No such device or address"

Now I just don’t know if it’s also broadcasting to tun1 - I can’t see my remote home’s zones yet, even though I think I have a functional udp-proxy-2020 running on my USG.

Aaron / others-

Good news - I can reliably start both the ARM64 Docker container on the UDM and the MIPS package on the USG! Sorry for the level of detail above.

Question - Is there any way to enable the same level of logging in the Docker container version as there is in the MIPS package of udp-proxy-2020? I know how to get “-L trace” working on my USG, but on my UDM Pro I cannot get any verbose logging out of the Docker/ARM64 version.

FWIW, I’m getting all kinds of activity in the logs on the USG (see below; not sure if they’re good or not):

DEBUG   vtun64: Unable to send packet; no discovered clients 
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   processing packet from eth1 on vtun64        
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   vtun64: Unable to send packet; no discovered clients 
DEBUG   processing packet from eth1 on vtun64        
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   vtun64: Unable to send packet; no discovered clients 
DEBUG   processing packet from eth1 on vtun64        
DEBUG   vtun64: Unable to send packet; no discovered clients 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker       

But when I run podman logs on the container on the UDM, the logs are always empty. (I can get some activity by connecting to Teleport and then disconnecting, when I get an error that it’s trying to send to the now-missing Teleport interface, but that’s unimportant to what I’m actually trying to achieve, which is site-to-site over OpenVPN):

# podman ps
CONTAINER ID  IMAGE                                       COMMAND               CREATED        STATUS            PORTS  NAMES
35f80864ba28  docker.io/synfinatic/udp-proxy-2020:latest  /bin/sh -c /usr/l...  8 minutes ago  Up 8 minutes ago         udp-proxy-2020
f3f508478663  localhost/unifi-os:latest                   /sbin/init            3 weeks ago    Up 24 hours ago          unifi-os
# podman logs 35f80864ba28
# 

Any help would be appreciated. If I could see what’s actually happening at a verbose level on the UDM, I bet I’d be better able to help myself. Although perhaps the “Unable to send packet; no discovered clients” on vtun64 on the USG is instructive.
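One guess I’ll try myself: the container’s command line (visible in the debug output above) ends in $EXTRA_ARGS, and “-L trace” works on the USG binary, so maybe setting this in the conf and then killing and relaunching the container passes the flag through. Pure speculation on my part:

```shell
# Guess: add to /mnt/data/udp-proxy-2020/udp-proxy-2020.conf and relaunch
EXTRA_ARGS="-L trace"
```

No idea yet whether the container’s logging ends up somewhere podman logs can see.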

Also, thanks to @j_a_m_i_e, I learned how to monitor ports. Lots of activity on my UDM on br0, none on tun1, while I opened and closed Roon remotes several times. Wish I could see what the udp-proxy-2020 container is “doing”.

Results on tun1:

# tcpdump -i tun1 udp port 9003
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun1, link-type RAW (Raw IP), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Results on br0:

# tcpdump -i br0 udp port 9003
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:29:51.025297 IP ROCK.localdomain.51544 > unifi.localdomain.9003: UDP, length 98
12:29:51.025366 IP ROCK.localdomain.51544 > unifi.localdomain.9003: UDP, length 98
12:29:51.187695 IP ROCK.localdomain.51544 > 192.168.2.1.9003: UDP, length 98
12:29:51.187847 IP ROCK.localdomain.51544 > 192.168.2.1.9003: UDP, length 98
12:29:51.197447 IP ROCK.localdomain.51544 > 192.168.1.255.9003: UDP, length 98
12:29:51.197515 IP ROCK.localdomain.51544 > 192.168.1.255.9003: UDP, length 98
12:29:52.765440 IP 192.168.1.223.51301 > 192.168.1.255.9003: UDP, length 332
12:29:52.765581 IP 192.168.1.223.53749 > 192.168.1.255.9003: UDP, length 332
12:29:52.765820 IP 192.168.1.223.49781 > 192.168.1.255.9003: UDP, length 332
12:29:52.765960 IP 192.168.1.223.62739 > 192.168.1.255.9003: UDP, length 332
12:29:52.766281 IP 192.168.1.223.63270 > 192.168.1.255.9003: UDP, length 332
12:29:55.354364 IP 192.168.1.223.50596 > 192.168.1.255.9003: UDP, length 332
12:29:55.354498 IP 192.168.1.223.65220 > 192.168.1.255.9003: UDP, length 332
12:29:55.355595 IP 192.168.1.223.50865 > 192.168.1.255.9003: UDP, length 332
12:29:55.355765 IP 192.168.1.223.61654 > 192.168.1.255.9003: UDP, length 332
12:29:55.356505 IP 192.168.1.223.60946 > 192.168.1.255.9003: UDP, length 332
12:30:12.057698 IP 192.168.1.223.52449 > 192.168.1.255.9003: UDP, length 332
12:30:12.058016 IP 192.168.1.223.54712 > 192.168.1.255.9003: UDP, length 332
12:30:12.058164 IP 192.168.1.223.50889 > 192.168.1.255.9003: UDP, length 332
12:30:12.058260 IP 192.168.1.223.60735 > 192.168.1.255.9003: UDP, length 332
12:30:12.058635 IP 192.168.1.223.62276 > 192.168.1.255.9003: UDP, length 332
12:30:12.118974 IP 192.168.1.223.50578 > unifi.localdomain.9003: UDP, length 98
12:30:12.124379 IP 192.168.1.223.50578 > Samsung.localdomain.9003: UDP, length 98
12:30:12.238112 IP ROCK.localdomain.51544 > unifi.localdomain.9003: UDP, length 98
12:30:12.238175 IP ROCK.localdomain.51544 > unifi.localdomain.9003: UDP, length 98
12:30:12.240707 IP 192.168.1.223.50578 > 192.168.1.255.9003: UDP, length 98
12:30:12.400187 IP ROCK.localdomain.51544 > 192.168.2.1.9003: UDP, length 98
12:30:12.400325 IP ROCK.localdomain.51544 > 192.168.2.1.9003: UDP, length 98
12:30:12.409888 IP ROCK.localdomain.51544 > 192.168.1.255.9003: UDP, length 98
12:30:12.411012 IP ROCK.localdomain.51544 > 192.168.1.255.9003: UDP, length 98
12:30:13.242629 IP 192.168.1.223.50578 > unifi.localdomain.9003: UDP, length 98
12:30:13.243807 IP 192.168.1.223.50578 > unifi.localdomain.9003: UDP, length 98
12:30:13.248040 IP 192.168.1.223.50578 > Samsung.localdomain.9003: UDP, length 98
12:30:13.248088 IP 192.168.1.223.50578 > Samsung.localdomain.9003: UDP, length 98
12:30:13.424685 IP 192.168.1.223.50578 > unifi.localdomain.9003: UDP, length 98
12:30:13.518491 IP 192.168.1.223.50578 > 192.168.1.255.9003: UDP, length 98
12:30:13.519834 IP 192.168.1.223.50578 > 192.168.1.255.9003: UDP, length 98
12:30:13.522727 IP 192.168.1.223.50578 > 192.168.1.255.9003: UDP, length 98
12:30:13.687956 IP 192.168.1.223.63050 > 192.168.1.255.9003: UDP, length 332
12:30:13.688944 IP 192.168.1.223.57842 > 192.168.1.255.9003: UDP, length 332
12:30:13.689100 IP 192.168.1.223.52474 > 192.168.1.255.9003: UDP, length 332
12:30:13.689217 IP 192.168.1.223.54627 > 192.168.1.255.9003: UDP, length 332
12:30:13.689330 IP 192.168.1.223.54607 > 192.168.1.255.9003: UDP, length 332
12:30:14.402462 IP 192.168.1.223.50578 > unifi.localdomain.9003: UDP, length 98
12:30:14.687082 IP 192.168.1.223.50578 > 192.168.1.255.9003: UDP, length 98
12:30:15.283267 IP rooExtend-Acrylic-PiZero.localdomain.48756 > 192.168.1.255.9003: UDP, length 104
12:30:15.581235 IP rooExtend-Acrylic-PiZero.localdomain.48756 > 192.168.1.255.9003: UDP, length 104
12:30:24.513224 IP rooExtend-Acrylic-PiZero.localdomain.44870 > 192.168.1.255.9003: UDP, length 104
12:30:25.145439 IP rooExtend-Acrylic-PiZero.localdomain.44870 > 192.168.1.255.9003: UDP, length 104
12:30:48.116742 IP ROCK.localdomain.51544 > unifi.localdomain.9003: UDP, length 98
12:30:48.286445 IP ROCK.localdomain.51544 > 192.168.2.1.9003: UDP, length 98
12:30:48.298106 IP ROCK.localdomain.51544 > 192.168.1.255.9003: UDP, length 98

Thanks!

You can specify a custom logging level using the EXTRA_ARGS environment variable for the docker container.

Regarding Roon clients at your other (remote) location - are they actual Roon clients, or are they AirPlay/etc.? FWIW, I’ve done zero testing with AirPlay or other non-native Roon clients. They use a totally different discovery protocol than the native Roon discovery protocol on UDP/9003.

Sorry to be incredibly dense, but I can’t figure out what syntax to use to specify the logging level. If I set -e EXTRA_ARGS=-trace then the container doesn’t start. If I set -e EXTRA_ARGS=-debug-trace the container starts, but podman logs [serial #] is still empty. I really apologize, but I’m so new to this stuff. -e EXTRA_ARGS=-debug I think works as it’s supposed to, but there’s nothing logged (except when I log into Teleport and then out, at which point it says “tried to send stuff to tlprt0 but the interface doesn’t exist”, which is reasonable). I gave myself 45 minutes to try to solve this myself, but found myself trying to read your golang files on GitHub, and that was non-productive.

Also, I’ll note that as far as I can tell, OpenVPN is doing what it’s supposed to do. When I disconnect from my core, and choose which core to log into, I can see both my primary and secondary home’s cores. I can log into either. I can also VPN into my secondary home (I have a L2TP VPN) and if I open the Roon controller I can see both my secondary home core and my primary home core. But, as expected given where I am in this journey, they can each only see the endpoints that are on their local subnets.

This isn’t the point of my post, but this is SOOO cool.


(Although I should say that this isn’t symmetric. If I’m VPNed in to my secondary home with “send all traffic via VPN” enabled, then I can see both cores, log into either core (primary home or secondary home), and control all local endpoints. But if I turn off the L2TP VPN and am on my primary home network, then I can only see my primary home core.)

From the built in help:
-L, --level="info" Log level [trace|debug|info|warn|error]

So EXTRA_ARGS="-L debug" or EXTRA_ARGS="-L trace". There are no --debug or --trace options.
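On the UDM that means quoting the whole assignment so the space survives the shell - something like this (the ports/interfaces here just mirror your earlier command):

```shell
# Quote the assignment: unquoted, the shell splits "-L debug" into two
# words and podman would try to parse "debug" as the image name.
podman run -i -d --rm --net=host --name udp-proxy-2020 \
  -e PORTS=9003 \
  -e INTERFACES=br0,tun1 \
  -e "EXTRA_ARGS=-L debug" \
  synfinatic/udp-proxy-2020:latest
```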

Wait, you have two cores? Guess I missed that part. Not sure what you’re trying to do??? Each endpoint can only register to a single core. So if you have two cores, you won’t be able to see both sets of endpoints.

Thanks, sorry I’m such a n00b. I didn’t know how to get built-in help to work on podman (as in I don’t know how to execute udp-proxy-2020 except in a container, and if you execute it in a container I don’t know where the help output goes). I’ll try my best.

In any case, just to recap: I’m trying to set up a single core to cover two homes. There was a core running at the secondary home behind the USG, but for now I’ve shut it down. The basic setup is:
Roon Core + endpoints -> UDM Pro (udp-proxy-2020) <------> USG 3 (udp-proxy-2020) --> endpoints
The Core is running at 192.168.1.100 on a subnet of 192.168.0.0/23, and the secondary home network is running on 192.168.10.0/23. They are connected by a site-to-site OpenVPN configured on the UDM Pro and USG 3, and the interfaces we want are br0 and tun1 on the UDM, and eth1 and vtun64 on the USG. I want to be able to run a remote at either location and have it work - to have the Core behind the UDM work with endpoints both behind the UDM and the USG.
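(Aside, in case it helps anyone reproducing this: before fighting discovery, it’s worth verifying plain unicast routing over the tunnel in both directions with the addresses above - roughly:)

```shell
# From a machine behind the USG (192.168.10.0/23): can we reach the Core?
ping -c 3 192.168.1.100

# From the Core's network behind the UDM: can we reach a remote endpoint?
ping -c 3 192.168.10.30
```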

The command that the shell script is running on the UDM is:

podman run -i -d --rm --net=host --name udp-proxy-2020 -e PORTS=9003 -e INTERFACES=br0,tun1 -e TIMEOUT=250 -e CACHETTL=90 -e "EXTRA_ARGS=-L trace" synfinatic/udp-proxy-2020:latest

The command that is running on the USG is:

sudo ./udp-proxy-2020 -L trace --port 9003 --interface eth1,vtun64 --fixed-ip=vtun64@192.168.1.100

(I’m not sure whether the --fixed-ip is needed; I’ve tried it both with and without, and the results are very similar. For reference, 192.168.51.51 is the OpenVPN tunnel IP on the UDM.)

Results for the logs of the UDM are an endlessly repeated set of:

# podman logs 20f9a3409b6a
level=debug msg="br0: ifIndex: 12"
level=debug msg="br0 network: ip+net\t\tstring: 192.168.1.1/23"
level=debug msg="br0 network: ip+net\t\tstring: 2601:19b:700:518d::1/64"
level=debug msg="br0 network: ip+net\t\tstring: fe80::68d7:9aff:fe29:75aa/64"
level=debug msg="Listen: (main.Listen) {\n iname: (string) (len=3) \"br0\",\n netif: (*net.Interface)(0x40000573c0)({\n  Index: (int) 12,\n  MTU: (int) 1500,\n  Name: (string) (len=3) \"br0\",\n  HardwareAddr: (net.HardwareAddr) (len=6 cap=41524) 6a:d7:9a:29:75:aa,\n  Flags: (net.Flags) up|broadcast|multicast\n }),\n ports: ([]int32) (len=1 cap=1) {\n  (int32) 9003\n },\n ipaddr: (string) (len=13) \"192.168.1.255\",\n promisc: (bool) false,\n handle: (*pcap.Handle)(<nil>),\n writer: (*pcapgo.Writer)(<nil>),\n inwriter: (*pcapgo.Writer)(<nil>),\n outwriter: (*pcapgo.Writer)(<nil>),\n timeout: (time.Duration) 250ms,\n clientTTL: (time.Duration) 0s,\n sendpkt: (chan main.Send) (cap=100) 0x4000058660,\n clients: (map[string]time.Time) {\n }\n}\n"
level=debug msg="tun1: ifIndex: 43"
level=debug msg="Listen: (main.Listen) {\n iname: (string) (len=4) \"tun1\",\n netif: (*net.Interface)(0x400027a700)({\n  Index: (int) 43,\n  MTU: (int) 1500,\n  Name: (string) (len=4) \"tun1\",\n  HardwareAddr: (net.HardwareAddr) ,\n  Flags: (net.Flags) up|pointtopoint|multicast\n }),\n ports: ([]int32) (len=1 cap=1) {\n  (int32) 9003\n },\n ipaddr: (string) \"\",\n promisc: (bool) true,\n handle: (*pcap.Handle)(<nil>),\n writer: (*pcapgo.Writer)(<nil>),\n inwriter: (*pcapgo.Writer)(<nil>),\n outwriter: (*pcapgo.Writer)(<nil>),\n timeout: (time.Duration) 250ms,\n clientTTL: (time.Duration) 0s,\n sendpkt: (chan main.Send) (cap=100) 0x4000058720,\n clients: (map[string]time.Time) {\n }\n}\n"
level=debug msg="br0: applying BPF Filter: (udp port 9003) and (src net 192.168.0.0/23)"
level=debug msg="Opened pcap handle on br0"
level=debug msg="tun1: applying BPF Filter: (udp port 9003) and (src net 192.168.51.51/32)"
level=debug msg="Opened pcap handle on tun1"
level=debug msg="Initialization complete!"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(tun1) ticker"
level=debug msg="handlePackets(tun1) ticker"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(tun1) ticker"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(tun1) ticker"
level=debug msg="br0: received packet and fowarding onto other interfaces"
level=debug msg="tun1: sending out because we're not br0"
level=debug msg="processing packet from br0 on tun1"
level=debug msg="tun1: Unable to send packet; no discovered clients"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(tun1) ticker"
level=debug msg="handlePackets(br0) ticker"
level=debug msg="handlePackets(tun1) ticker"

The logs on the USG basically look like this:

DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   processing packet from eth1 on vtun64        
DEBUG   vtun64 => 192.168.51.51: packet len: 132     
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   eth1: received packet and fowarding onto other interfaces 
DEBUG   vtun64: sending out because we're not eth1   
DEBUG   processing packet from eth1 on vtun64        
DEBUG   vtun64 => 192.168.51.51: packet len: 132     
DEBUG   processing packet from eth1 on vtun64        
DEBUG   vtun64 => 192.168.51.51: packet len: 132     
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker                 
DEBUG   handlePackets(eth1) ticker                   
DEBUG   handlePackets(vtun64) ticker           

I can see you’ve dealt with this exact set of errors before - the level=debug msg="tun1: Unable to send packet; no discovered clients" seems to mean that no packets are being passed from br0 on the UDM to tun1.

I think that’s confirmed because when I run a tcpdump on the UDM on br0 while restarting Roon I get the following:

# tcpdump -ni br0 -s 0 udp port 9003
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:32:39.888512 IP 192.168.0.91.64282 > 192.168.1.255.9003: UDP, length 339
00:32:39.888660 IP 192.168.0.91.51259 > 192.168.1.255.9003: UDP, length 339
00:32:39.888769 IP 192.168.0.91.50307 > 192.168.1.255.9003: UDP, length 339
00:32:45.057133 IP 192.168.1.32.35879 > 192.168.1.255.9003: UDP, length 104
00:32:53.214127 IP 192.168.1.100.51544 > 192.168.1.1.9003: UDP, length 98
00:32:53.214387 IP 192.168.1.100.51544 > 192.168.1.1.9003: UDP, length 98
00:32:53.271749 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 98
00:32:53.389162 IP 192.168.1.100.51544 > 192.168.2.1.9003: UDP, length 98
00:32:53.399205 IP 192.168.1.100.51544 > 192.168.1.255.9003: UDP, length 98
00:32:53.399241 IP 192.168.1.100.51544 > 192.168.2.1.9003: UDP, length 98
00:32:53.409449 IP 192.168.1.100.51544 > 192.168.1.255.9003: UDP, length 98
00:32:53.446159 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 265
00:32:53.496647 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 98
00:32:53.602706 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 265
00:32:54.177941 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 98
00:32:54.267409 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 98
00:32:54.448583 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 98
00:32:54.601996 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 98
00:32:54.762812 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 98
00:32:54.780736 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 98
00:32:54.971433 IP 192.168.0.91.50664 > 192.168.1.255.9003: UDP, length 339
00:32:54.971634 IP 192.168.0.91.65506 > 192.168.1.255.9003: UDP, length 339
00:32:54.971779 IP 192.168.0.91.55883 > 192.168.1.255.9003: UDP, length 339
00:32:55.379288 IP 192.168.0.91.58545 > 192.168.1.1.9003: UDP, length 98
00:32:55.804988 IP 192.168.0.91.58545 > 192.168.1.255.9003: UDP, length 98

But when I run tcpdump on the UDM on tun1 and close down and restart Roon, I’m only seeing inbound traffic from a single RAAT endpoint at 192.168.10.30 behind the USG (so that side must be successfully passing packets in the other direction):

# tcpdump -ni tun1 -s 0 udp port 9003
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun1, link-type RAW (Raw IP), capture size 262144 bytes
00:34:53.273009 IP 192.168.10.30.52502 > 192.168.51.51.9003: UDP, length 104
00:34:53.278286 IP 192.168.10.30.52502 > 192.168.51.51.9003: UDP, length 104
00:34:53.278327 IP 192.168.10.30.55085 > 192.168.51.51.9003: UDP, length 104
00:35:53.395630 IP 192.168.10.30.52502 > 192.168.51.51.9003: UDP, length 104
00:35:53.395705 IP 192.168.10.30.52502 > 192.168.51.51.9003: UDP, length 104
00:35:53.395740 IP 192.168.10.30.55085 > 192.168.51.51.9003: UDP, length 104

In case it’s useful, here are the relevant portions of ifconfig from the UDM:

br0       Link encap:Ethernet  HWaddr 6A:D7:9A:29:75:AA  
          inet addr:192.168.1.1  Bcast:0.0.0.0  Mask:255.255.254.0
          inet6 addr: 2601:19b:700:518d::1/64 Scope:Global
          inet6 addr: fe80::68d7:9aff:fe29:75aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20115653 errors:0 dropped:1 overruns:0 frame:0
          TX packets:81190735 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:9794685569 (9.1 GiB)  TX bytes:109902935292 (102.3 GiB)

tun1      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:192.168.51.51  P-t-P:192.168.50.50  Mask:255.255.255.255
          inet6 addr: fe80::4107:b289:2df4:c8ce/64 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:43792 errors:0 dropped:0 overruns:0 frame:0
          TX packets:40050 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:38001308 (36.2 MiB)  TX bytes:3281215 (3.1 MiB)

Hope this is helpful. Many many thanks.

Again, what kind(s) of clients are on the remote end of the VPN, opposite the Roon Core?

The reason things work on your phone is that when you open the Roon app, it attempts to discover the Core. This teaches udp-proxy-2020 where your phone is, so when the packets are forwarded to the Core and the Core responds, udp-proxy-2020 can forward the replies back to your phone.

If you have an endpoint on the far side of the VPN that doesn’t attempt to discover a Core, then there are no packets for udp-proxy-2020 to learn from or forward.

Based on the tcpdump on tun1, it seems that traffic from some RAAT endpoint on the USG network is indeed trying to discover your Core, and that those packets are being forwarded by udp-proxy-2020 on the USG over the VPN. It’s hard to say what is going on after that without the data.

The good news is that udp-proxy-2020 has debug features for capturing pcaps consistently:

  -P, --pcap                       Generate pcap files for debugging
  -d, --pcap-path="/root"          Directory to write debug pcap files

Note that if you’re running this on the UDM, you need to do a volume mount for /root (or some other path) so the pcap files are accessible. Once you have those, you can open a ticket on GitHub and attach them there.
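On the UDM that could look something like the following (the host directory here is just an example path):

```shell
# Bind-mount a directory on the UDM over /root (the default --pcap-path)
# so the debug pcaps survive outside the container:
mkdir -p /data/udp-proxy-pcaps
podman run -i -d --rm --net=host --name udp-proxy-2020 \
  -v /data/udp-proxy-pcaps:/root \
  -e PORTS=9003 \
  -e INTERFACES=br0,tun1 \
  -e "EXTRA_ARGS=-P" \
  synfinatic/udp-proxy-2020:latest
```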

To all the Roon Community Members following this forum thread,

I know that many of us have been seeking an effective and efficient way to run Roon on a cloud machine and connect it to our home networks via a layer 2 VPN. While there have been some successful attempts, the scattered information on the forums can make it quite challenging to find a comprehensive guide.

I am sharing a simple, easy-to-follow setup procedure that I’ve documented, utilizing a Linux cloud VPS and an OpenVPN layer 2 VPN. I’m delighted to report that the performance has been absolutely flawless, and I am confident that this guide will help many Roon users achieve the same satisfaction.

To foster a collaborative environment and avoid the information fragmentation often seen in forums, I’ve decided to host this guide on GitHub. This platform allows you, the Roon community, to propose updates and improvements using standard GitHub practices (fork, commit, pull request).

Here is the link to the guide:
https://github.com/drsound/roon-cloud-setup

By sharing this guide, I hope to streamline the process of operating Roon on a cloud machine for many Roon users. I look forward to your valuable feedback, suggestions, and contributions.

Happy listening!

Best regards,
Alessandro

This is quite an amazing first post! I’m sure this is going to be very interesting and useful to a number of people!

Question for the smart folks on this thread: does Unifi Site Magic (which creates an SD-WAN) make it any easier to connect in a site-to-site setup (Roon core, endpoints, and remotes at home A, but also endpoints and remotes at home B)?

Details and links to posts here:

I’m trying to use udp-proxy-2020 to enable Roon clients as endpoints on a vlan other than the one Roon sits on. It’s not working and I’m hoping to get a hand from @Aaron_Turner or someone who may have already pulled this off.

I have two relevant vlans: “main” and “music”.

Roon (as well as all endpoints and extension-related hardware/software) is on the music vlan, the clients I’m trying to enable as endpoints are on main. The clients work fine as controllers.

I have a Synology NAS which has ethernet connections to both networks. Main is eth0, music is eth1. Static routes are defined that direct all traffic on these subnets to their respective gateways. This may be important with respect to being able to use loopback.

Roon runs in a Docker container on this Synology device. Docker has a macvlan set up on the music vlan.

I’m trying to run udp-proxy-2020 in a Docker container. My compose is below followed by the log output I get when running it. You’ll see that I’m running it as host. I’ve also tried it on the music vlan. Neither approach works.

Log output from a run as host is below the compose file.

Any ideas at all will be appreciated.

Thanks!

version: '3.7'

services:
  udp-proxy:
    environment:
      - TZ=America/Los_Angeles
    image: synfinatic/udp-proxy-2020
    container_name: udp-proxy-2020
    restart: unless-stopped
    network_mode: host

    labels:
      - "com.centurylinklabs.watchtower.scope=iot-containers"

    command: udp-proxy-2020 --port 9003 --interface eth0,eth1 --cache-ttl 300 --level trace

udp-proxy-2020

2023/10/20 07:14:56	stderr	level=debug msg="handlePackets(eth0) ticker"
2023/10/20 07:14:56	stderr	level=debug msg="handlePackets(eth1) ticker"
2023/10/20 07:14:51	stderr	level=debug msg="handlePackets(eth0) ticker"
2023/10/20 07:14:51	stderr	level=debug msg="handlePackets(eth1) ticker"
2023/10/20 07:14:46	stderr	level=debug msg="handlePackets(eth1) ticker"
2023/10/20 07:14:46	stderr	level=debug msg="handlePackets(eth0) ticker"
2023/10/20 07:14:41	stderr	level=debug msg="Initialization complete!"
2023/10/20 07:14:41	stderr	level=debug msg="Opened pcap handle on eth1"
2023/10/20 07:14:41	stderr	level=debug msg="eth1: applying BPF Filter: (udp port 9003) and (src net 192.168.50.0/24)"
2023/10/20 07:14:41	stderr	level=debug msg="Opened pcap handle on eth0"
2023/10/20 07:14:41	stderr	level=debug msg="eth0: applying BPF Filter: (udp port 9003) and (src net 192.168.20.0/24)"
2023/10/20 07:14:41	stderr	level=debug msg="Listen: (main.Listen) {\n iname: (string) (len=4) \"eth1\",\n netif: (*net.Interface)(0xc00001aec0)({\n  Index: (int) 5,\n  MTU: (int) 1500,\n  Name: (string) (len=4) \"eth1\",\n  HardwareAddr: (net.HardwareAddr) (len=6 cap=8452) 00:11:32:e2:50:df,\n  Flags: (net.Flags) up|broadcast|multicast\n }),\n ports: ([]int32) (len=1 cap=1) {\n  (int32) 9003\n },\n ipaddr: (string) (len=14) \"192.168.50.255\",\n promisc: (bool) false,\n handle: (*pcap.Handle)(<nil>),\n writer: (*pcapgo.Writer)(<nil>),\n inwriter: (*pcapgo.Writer)(<nil>),\n outwriter: (*pcapgo.Writer)(<nil>),\n timeout: (time.Duration) 250ms,\n clientTTL: (time.Duration) 0s,\n sendpkt: (chan main.Send) (cap=100) 0xc0000606c0,\n clients: (map[string]time.Time) {\n }\n}\n"
2023/10/20 07:14:41	stderr	level=debug msg="eth1 network: ip+net\t\tstring: 192.168.50.2/24"
2023/10/20 07:14:41	stderr	level=debug msg="eth1: ifIndex: 5"
2023/10/20 07:14:41	stderr	level=debug msg="Listen: (main.Listen) {\n iname: (string) (len=4) \"eth0\",\n netif: (*net.Interface)(0xc00001a840)({\n  Index: (int) 6,\n  MTU: (int) 1500,\n  Name: (string) (len=4) \"eth0\",\n  HardwareAddr: (net.HardwareAddr) (len=6 cap=7256) 00:11:32:e2:50:de,\n  Flags: (net.Flags) up|broadcast|multicast\n }),\n ports: ([]int32) (len=1 cap=1) {\n  (int32) 9003\n },\n ipaddr: (string) (len=14) \"192.168.20.255\",\n promisc: (bool) false,\n handle: (*pcap.Handle)(<nil>),\n writer: (*pcapgo.Writer)(<nil>),\n inwriter: (*pcapgo.Writer)(<nil>),\n outwriter: (*pcapgo.Writer)(<nil>),\n timeout: (time.Duration) 250ms,\n clientTTL: (time.Duration) 0s,\n sendpkt: (chan main.Send) (cap=100) 0xc000060600,\n clients: (map[string]time.Time) {\n }\n}\n"
2023/10/20 07:14:41	stderr	level=debug msg="eth0 network: ip+net\t\tstring: 192.168.20.2/24"
2023/10/20 07:14:41	stderr	level=debug msg="eth0: ifIndex: 6"