How I got Roon working over OpenVPN (hard for me, easy for you)

Ok.

Sorry, there was a lot of information in here, most of it confusing. The script had a bunch of conflicting debug options in it, so I deleted them. Whether we have enough information now, I don’t know. But the command the shell script builds (when I echo it instead of running it) is now:

# podman run --log-level=debug -i -d --rm --net=host --name udp-proxy-2020 -e PORTS=9003 -e INTERFACES=br0,tun1 -e TIMEOUT=250 -e CACHETTL=90 -e EXTRA_ARGS= synfinatic/udp-proxy-2020:latest
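For anyone reproducing this, here’s a hedged sketch of what the boot script presumably looks like. The variable layout and the echo-before-run pattern are my reconstruction, not the actual file contents; the podman command itself is copied from the echoed output above.

```shell
#!/bin/sh
# Hypothetical sketch of /mnt/data/on_boot.d/40-udp-proxy-2020.sh.
# The variables are an assumption; the podman command matches the
# echoed output above.
PORTS=9003
INTERFACES=br0,tun1
TIMEOUT=250
CACHETTL=90
EXTRA_ARGS=

CMD="podman run --log-level=debug -i -d --rm --net=host \
--name udp-proxy-2020 \
-e PORTS=$PORTS -e INTERFACES=$INTERFACES -e TIMEOUT=$TIMEOUT \
-e CACHETTL=$CACHETTL -e EXTRA_ARGS=$EXTRA_ARGS \
synfinatic/udp-proxy-2020:latest"

echo "$CMD"    # sanity-check the assembled command first...
# eval "$CMD"  # ...then uncomment this to actually launch it
```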

And when I run the script with no echo (that is, actually executing the command rather than printing it), I get this:

# /mnt/data/on_boot.d/40-udp-proxy-2020.sh
DEBU[0000] using conmon: "/usr/libexec/podman/conmon"   
DEBU[0000] Initializing boltdb state at /mnt/data/podman/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root /mnt/data/podman/storage    
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /mnt/data/podman/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /mnt/data/podman/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] overlay: imagestore=/var/lib/containers/storage 
DEBU[0000] overlay: skip_mount_home=false               
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is not being used 
DEBU[0000] cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
INFO[0000] [graphdriver] using prior storage driver: overlay 
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/10-libpod.conflist 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]docker.io/synfinatic/udp-proxy-2020:latest" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] Using host netmode                           
DEBU[0000] setting container name udp-proxy-2020        
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 1 for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
DEBU[0000] parsed reference into "[overlay@/mnt/data/podman/storage+/var/run/containers/storage:.imagestore=/var/lib/containers/storage,.skip_mount_home=false,.mountopt=nodev]@914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] exporting opaque data as blob "sha256:914d21726ae409331c611904b8ac8977266770645c2117e9c2d06bc8fea4412b" 
DEBU[0000] created container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has work directory "/mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has run directory "/var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata" 
DEBU[0000] New container created "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" has CgroupParent "/libpod_parent/libpod-1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" 
DEBU[0000] overlay: mount_data=nodev,lowerdir=/mnt/data/podman/storage/overlay/l/N7XV33BVS5VM4IC4KLWJO3BRSF:/mnt/data/podman/storage/overlay/l/RX3ZB32DEEMUSS5AEID7WS23VA:/mnt/data/podman/storage/overlay/l/3FZIP53NLCOPDJ2TG2QBFCN5PN,upperdir=/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/diff,workdir=/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/work 
DEBU[0000] mounted container "1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6" at "/mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/merged" 
DEBU[0000] Created root filesystem for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 at /mnt/data/podman/storage/overlay/c4c8771b767224f4face6c743e51ce9ac9c9a2044eac84f302c393af1539b115/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroup path for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 to /libpod_parent/libpod-1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 at /mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/config.json 
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/libexec/podman/conmon    args="[--api-version 1 -c 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 -u 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 -r /usr/bin/runc -b /mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata -p /var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/pidfile -l k8s-file:/mnt/data/podman/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -i --conmon-pidfile /var/run/containers/storage/overlay-containers/1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /mnt/data/podman/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-opt --exit-command-arg .imagestore=/var/lib/containers/storage --exit-command-arg --storage-opt --exit-command-arg .skip_mount_home=false --exit-command-arg --storage-opt --exit-command-arg .mountopt=nodev --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus: open /sys/fs/cgroup/libpod_parent/conmon/cpuset.cpus.effective: no such file or directory 
DEBU[0000] Received: 14403                              
INFO[0000] Got Conmon PID as 14386                      
DEBU[0000] Created container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 in OCI runtime 
DEBU[0000] Starting container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 with command [/bin/sh -c /usr/local/bin/udp-proxy-2020 --port $PORTS     --interface $INTERFACES --timeout $TIMEOUT     --cache-ttl $CACHETTL $EXTRA_ARGS] 
DEBU[0000] Started container 1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6 
1e0372f9a44ba23e5bd3b0352b94cef35347330f783c1a46ebabbd3936726fc6

And lo and behold, that results in a running container!

# podman ps
CONTAINER ID  IMAGE                                       COMMAND               CREATED        STATUS            PORTS  NAMES
1e0372f9a44b  docker.io/synfinatic/udp-proxy-2020:latest  /bin/sh -c /usr/l...  4 seconds ago  Up 4 seconds ago         udp-proxy-2020
f3f508478663  localhost/unifi-os:latest                   /sbin/init            3 weeks ago    Up 2 hours ago           unifi-os

Now, I have no idea if it’s actually doing anything - but I have relearned enough vi to work out that it was all the conflicting debug options that were killing me. At this point it’s repeatable - I can kill that container and relaunch it using the script.
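The kill-and-relaunch cycle is just the following (container name and script path as above; the guard is mine so it’s safe to paste on a machine that isn’t the UDM):

```shell
# Kill-and-relaunch cycle: --rm means killing the container also removes
# it, so the boot script can recreate it from scratch. Guarded so this
# is a no-op on a machine without podman or the script.
if command -v podman >/dev/null 2>&1 \
   && [ -x /mnt/data/on_boot.d/40-udp-proxy-2020.sh ]; then
  podman kill udp-proxy-2020 || true   # ignore "no such container"
  /mnt/data/on_boot.d/40-udp-proxy-2020.sh
  RESULT="relaunched"
else
  RESULT="skipped: not on the UDM"
fi
echo "$RESULT"
```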

And the image is the correct one, which I confirmed by learning the --digests option (the f35c82df digest matches the one on the Docker Hub page):

# podman images --digests
ERRO[0000] error checking if image is a parent "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": error reading image "4a642e78ff5649447532caefa78aea05858ca4bbed11e44be2858704660165ec": layer not known 
REPOSITORY                            TAG       DIGEST                                                                    IMAGE ID       CREATED         SIZE                       R/O
localhost/unifi-os                    latest    sha256:877043d617062e5e1171151f6675f7a201a055f61d5a10447e6271c8faf2bebe   be4d69174995   7 weeks ago     1.82 GB                    true
localhost/unifi-os                    default   sha256:877043d617062e5e1171151f6675f7a201a055f61d5a10447e6271c8faf2bebe   be4d69174995   7 weeks ago     1.82 GB                    true
docker.io/synfinatic/udp-proxy-2020   latest    sha256:f35c82dfbdacfeb62b1e0dea5e8d9bb98014d21b22353150d0790daf699c8eeb   914d21726ae4   7 months ago    12.9 MB                    false
localhost/unifi-os                    current   sha256:4c84adfc305424cfbb0ca4f2cbffdbccb8a9b12152a39e0ee1a6820ac2dfd57f   4a642e78ff56   10 months ago   unable to determine size   false

Now @Aaron_Turner, what should I do to determine whether this is actually doing what it’s supposed to?
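One idea I might try myself (no idea whether tcpdump is even installed on the UDM, so this is a guess): watch each side for the discovery traffic the proxy is supposed to be copying.

```shell
# Hedged idea: sniff each interface for Roon discovery traffic
# (UDP 9003, matching the PORTS setting above). Each command would
# run in its own shell on the UDM; tcpdump being present there is
# an assumption.
LAN_SIDE="tcpdump -ni br0 udp port 9003"
VPN_SIDE="tcpdump -ni tun1 udp port 9003"
echo "$LAN_SIDE"
echo "$VPN_SIDE"
```

Seeing the same broadcast arrive on br0 and get re-sent on tun1 (and vice versa) would confirm the proxy is actually forwarding.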

EDIT: HOLY CRAP! It is doing something right… I got it to work over Teleport (Unifi’s white-labeled WireGuard). I added tlprt0 to the interfaces line in the config, killed and relaunched the container, and then, over Teleport (WiFi off, on cellular), I was able to (a) see my iPhone in the list of zones in the Roon remote even though it is a private zone, and (b) successfully play to it! So that’s something! It’s not what I came here trying to do, but I feel pretty psyched that I did it as a side effect!


^^ Note there’s no wi-fi symbol there - I’m accessing over cellular, and I will note again that I feel like a total badass.
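For anyone following along, the entire Teleport change was one edit to the interface list in the script (variable name matching the container’s env above; tlprt0 is the name Teleport’s interface shows up as on my UDM):

```shell
# Before: INTERFACES=br0,tun1
INTERFACES=br0,tun1,tlprt0   # add tlprt0, Teleport's WireGuard interface
echo "$INTERFACES"
```

Then kill and relaunch the container as before, and the proxy picks up the new interface.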

And… in the perhaps-good-news department: once I disconnect my iPhone from Teleport, the interface must disappear, because I get lines like this in podman logs cbd1e4adc7ae. I assume they won’t build up forever?

level=warning msg="Unable to send 132 bytes from br0 out tlprt0: send: No such device or address"

Now I just don’t know whether it’s also broadcasting to tun0 - I can’t see my remote home zones yet, even though I think I have a functional copy of udp-proxy-2020 running on my USG.
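My plan for checking the tun0 side is to grep the proxy’s logs for it; forwarding (or even "Unable to send") lines naming tun0, like the br0/tlprt0 one above, would at least confirm the proxy sees that interface. Guarded so it’s copy-paste safe off the UDM:

```shell
# Look for any log lines mentioning tun0. Either successful forwards or
# "Unable to send ... out tun0" warnings would show the proxy knows
# about the interface at all.
if command -v podman >/dev/null 2>&1; then
  LOGS=$(podman logs udp-proxy-2020 2>&1 || true)
else
  LOGS="podman not available on this box"
fi
RESULT=$(echo "$LOGS" | grep tun0 || echo "no tun0 lines found")
echo "$RESULT"
```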