Networking and container issues after watchtower update

New to Docker and trying to learn here. I created several containers with docker-compose, following a guide I found, and installed Portainer along the way. I have a few quirks I can’t figure out how to solve. My goal is to set up a single stack in Portainer this time with all the containers, and hopefully have it all work together without the issues I keep hitting.

  1. Watchtower updates containers, and that causes issues I can’t work around.
    a. I route the traffic of 6 containers through 1 VPN container. When the VPN container updates, the other 6 lose their container network setting and I have to redo them all.
    b. The VPN container’s docker_default IP address changes when it updates, which causes access issues for routing on my home network.

  2. Randomly, containers will disappear. For example, Jackett disappeared over the last two days for some reason, and I had to recreate it from the docker-compose script through Portainer’s custom applications.

I’d like to get the routing part handled first and foremost: have the containers reliably find the updated VPN container without losing their settings, and give the VPN container a static IP.

And if this doesn’t solve the random loss of containers on update, then debugging advice is also much appreciated.

Thank you!!!

Examples from the docker-compose file that originally created the containers:

transmission-vpn:
  container_name: transmission-vpn
  hostname: transmission
  image: haugene/transmission-openvpn
  cap_add:
    - NET_ADMIN
  devices:
    - /dev/net/tun
  restart: always
  ports:
    - "9091:9091"
    - "8888:8888"
    - "7878:7878"
    - "8989:8989"
    - "9117:9117"
  dns:
    - 1.1.1.1
    - 8.8.8.8
  volumes:
    - …

# Sonarr – TV Show Download and Management
sonarr:
  image: "linuxserver/sonarr"
  hostname: sonarr
  container_name: "sonarr"
  volumes:
    - …
  restart: always
  environment:
    - PUID=${PUID}
    - PGID=${PGID}
    - TZ=${TZ}

You can modify the sonarr container to use the VPN container for outbound WAN traffic. In the sonarr service definition, add:

  • network_mode: "container:transmission-vpn"

All of the sonarr ports (e.g. 8989) will need to be published on the ‘transmission-vpn’ container, which you have already done.
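
As a rough sketch (volumes, env, and the rest elided, so treat it as a shape rather than a drop-in file), the pair ends up looking like this:

transmission-vpn:
  image: haugene/transmission-openvpn
  container_name: transmission-vpn
  cap_add:
    - NET_ADMIN
  devices:
    - /dev/net/tun
  restart: always
  ports:
    - "9091:9091"   # transmission web UI
    - "8989:8989"   # sonarr web UI, published here because sonarr shares this network namespace

sonarr:
  image: linuxserver/sonarr
  container_name: sonarr
  restart: always
  network_mode: "container:transmission-vpn"   # no ports (and no hostname) on sonarr itself; both conflict with this mode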

With that in place, other containers can reach sonarr via http://host_ip:8989. If the host has a static IP address reserved in your router’s DHCP service, this shouldn’t change again.

When the VPN container updates, remember that the other containers, such as sonarr, are still routing through the old container instance. Naturally, they need to be restarted to route through the new instance.
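
If both services are defined in the same compose file, one option (an assumption on my part, not something from your current setup) is the service: form of network_mode plus depends_on, which makes the dependency explicit so a docker-compose up after the VPN container changes recreates sonarr along with it:

sonarr:
  image: linuxserver/sonarr
  network_mode: "service:transmission-vpn"   # reference the compose service, not the container name
  depends_on:
    - transmission-vpn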

watchtower claims to support updates to linked containers. Does your docker-compose YAML file contain both the vpn and sonarr services, or are they created separately? If separate, they’re updated independently, and the orphaned sonarr container is still trying to route out through the old VPN container, which no longer exists.
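
On the watchtower side, the maintained fork (containrrr/watchtower, rather than the old v2tec image) documents a depends-on label for exactly this, so that sonarr gets bounced whenever transmission-vpn is updated. Hedging a bit since I haven’t run it against this particular stack, but it should look roughly like:

sonarr:
  labels:
    - "com.centurylinklabs.watchtower.depends-on=transmission-vpn"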

Hi!

Just wanted to chime in with a visual for this. Here’s a video showing multiple containers all running their internet connections through a VPN container as well as how to set it up: https://youtu.be/xbSfaKwyfXE

@apwiggins yeah, sadly I was too new to know what was good for me. All were made from a docker-compose file originally, but after some issues and lost containers I remade each one, as it was lost, with Portainer’s applications. I’m trying to do a docker-compose override (read about it last night), but there are errors in the file that I can’t find and that prevent it from working. I’m curious whether it will take over the containers that were recreated with Portainer or whether I need to start over. I don’t want to lose all the customized data inside the containers that are running. Argh!

@davidnburgess Will watch this tonight.


@davidnburgess your video is spot on for what I was doing. My issues are in line with what @apwiggins brings up, I think.

So I guess I need to figure out whether I need to delete the containers I recreated in Portainer after they were lost during watchtower updates, or whether a docker-compose override will take them over again.

If I can get this all created this way, hard-coded into the compose deployment, I think I can avoid the losses and the orphan issue that occurs when the VPN container is updated and redeployed.

Also, I keep getting these errors with the override file, and I cannot figure out why for the life of me:

ERROR: yaml.parser.ParserError: while parsing a block mapping
  in "./docker-compose.override.yml", line 187, column 5
expected <block end>, but found '<block mapping start>'
  in "./docker-compose.override.yml", line 226, column 20

The error corresponds to these lines:
187 = "    image: linuxserver/lidarr"
226 = "    network_mode: ‘container:transmission-vpn’" under the Jackett entry.
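
From what I’ve read, that particular parser error usually means a key is indented one level deeper than its siblings, as in this made-up fragment (not from my file, just the shape):

lidarr:
  image: linuxserver/lidarr
    restart: always   # indented too far: expected <block end>, but found '<block mapping start>'

But I’ve stared at those lines and can’t spot anything like that. Here’s the whole file: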

version: "3.6"

services:

  # Portainer - WebUI for Containers
  portainer:
    image: portainer/portainer-ce
    hostname: portainer
    container_name: portainer
    restart: always
    command: -H unix:///var/run/docker.sock
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/derek/docker/portainer/data:/data
      - /home/derek/docker/shared:/shared
    environment:
      - TZ=America/New_York

  # Transmission with VPN – Bittorrent Downloader
  transmission-vpn:
    container_name: transmission-vpn
    hostname: transmission
    image: haugene/transmission-openvpn
    networks:
      service1_net:
        ipv4_address: 172.22.0.100
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    restart: always
    ports:
      - "9091:9091"
      - "8888:8888"
      - "7878:7878"
      - "8989:8989"
      - "9117:9117"
      - "8765:80"
      - "5050:5050"
      - "6789:6789"
      - "8081:8081"
      - "8085:8085"
      - "8686:8686"
      - "5055:5055"
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - …
    environment:
      - OPENVPN_PROVIDER=NORDVPN
      - OPENVPN_USERNAME=…
      - OPENVPN_PASSWORD=…
      - NORDVPN_CATEGORY=legacy_p2p
      - NORDVPN_COUNTRY=US
      - NORDVPN_PROTOCOL=tcp
      - OPENVPN_OPTS=--inactive 3600 --ping 15 --ping-exit 60
      - LOCAL_NETWORK=192.168.1.0/24
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
      - TRANSMISSION_RPC_AUTHENTICATION_REQUIRED=true
      - TRANSMISSION_RPC_HOST_WHITELIST=...
      - TRANSMISSION_RPC_PASSWORD=…
      - TRANSMISSION_RPC_USERNAME=admin
      - TRANSMISSION_UMASK=002
      - TRANSMISSION_RATIO_LIMIT=0
      - TRANSMISSION_RATIO_LIMIT_ENABLED=true
      - TRANSMISSION_SEED_QUEUE_ENABLED=true
      - TRANSMISSION_SEED_QUEUE_SIZE=1
      - TRANSMISSION_TRASH_ORIGINAL_TORRENT_FILES=true
      - TRANSMISSION_ADDED_TORRENTS=true
      - TRANSMISSION_BLOCKLIST_URL=…
      - TRANSMISSION_BLOCKLIST_ENABLED=true
      - TRANSMISSION_BLOCKLIST_UPDATES_ENABLED=true
      - TRANSMISSION_ENCRYPTION=2
      - TRANSMISSION_UTP_ENABLED=true
      - TRANSMISSION_DHT_ENABLED=true
      - TRANSMISSION_PEX_ENABLED=true
      - TRANSMISSION_INCOMPLETE_DIR=/data/incomplete
      - TRANSMISSION_INCOMPLETE_DIR_ENABLED=true
      - TRANSMISSION_WATCH_DIR=/data/watch
      - TRANSMISSION_WATCH_DIR_ENABLED=true
      - TRANSMISSION_DOWNLOAD_DIR=/data/completed/tvshows
      - TRANSMISSION_ALT_SPEED_DOWN=7000
      - TRANSMISSION_ALT_SPEED_ENABLED=true
      - TRANSMISSION_ALT_SPEED_UP=2
      - TRANSMISSION_ALT_SPEED_TIME_BEGIN=1360
      - TRANSMISSION_ALT_SPEED_TIME_DAY=127
      - TRANSMISSION_ALT_SPEED_TIME_ENABLED=true
      - TRANSMISSION_ALT_SPEED_TIME_END=540
      - TRANSMISSION_SPEED_LIMIT_DOWN=4000
      - TRANSMISSION_SPEED_LIMIT_DOWN_ENABLED=true
      - TRANSMISSION_SPEED_LIMIT_UP=1
      - TRANSMISSION_SPEED_LIMIT_UP_ENABLED=true
      - TRANSMISSION_UPLOAD_SLOTS_PER_TORRENT=2
      - TRANSMISSION_SCRAPE_PAUSED_TORRENTS_ENABLED=true
      - TRANSMISSION_RPC_WHITELIST_ENABLED=false
      - TRANSMISSION_RENAME_PARTIAL_FILES=true
      - TRANSMISSION_QUEUE_STALLED_MINUTES=10
      - TRANSMISSION_QUEUE_STALLED_ENABLED=true
      - TRANSMISSION_PREALLOCATION=1
      - TRANSMISSION_PREFETCH_ENABLED=true
      - TRANSMISSION_MAX_PEERS_GLOBAL=400
      - TRANSMISSION_MAX_PEERS_PER_TORRENT=20
      - TRANSMISSION_DOWNLOAD_QUEUE_ENABLED=true
      - TRANSMISSION_DOWNLOAD_QUEUE_SIZE=60

  # Transmission’s Proxy
  proxy:
    image: haugene/transmission-openvpn-proxy
    links:
      - transmission-vpn
    ports:
      - 8080:8080
    volumes:
      - /etc/localtime:/etc/localtime:ro

  # Overseerr - Alternate to Ombi
  overseerr:
    image: sctx/overseerr
    container_name: overseerr
    environment:
      - LOG_LEVEL=info
      - TZ=America/New_York
    volumes:
      - /home/derek/docker/overseerr:/app/config
    restart: unless-stopped
    network_mode: ‘container:transmission-vpn’

  # Radarr – Movie Download and Management
  radarr:
    image: linuxserver/radarr
    hostname: radarr
    container_name: radarr
    volumes:
      - …
      - /etc/localtime:/etc/localtime:ro
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    network_mode: ‘container:transmission-vpn’

  # Sonarr – TV Show Download and Management
  sonarr:
    image: linuxserver/sonarr:preview
    hostname: sonarr
    container_name: sonarr
    volumes:
      - …
      - /etc/localtime:/etc/localtime:ro
    restart: always
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    network_mode: ‘container:transmission-vpn’

  # LIDARR - Music Download and Management
  lidarr:
    image: linuxserver/lidarr
    hostname: lidarr
    container_name: lidarr
    volumes:
      - …
      - /etc/localtime:/etc/localtime:ro
    restart: always
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    network_mode: ‘container:transmission-vpn’

  # Jackett – Torrent Proxy
  jackett:
    image: linuxserver/jackett
    hostname: jackett
    container_name: jackett
    volumes:
      - …
      - /etc/localtime:/etc/localtime:ro
    restart: always
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    network_mode: ‘container:transmission-vpn’

  ddclient:
    image: ghcr.io/linuxserver/ddclient
    container_name: ddclient
    environment:
      - PUID=1000
      - PGID=996
      - TZ=America/New_York
    volumes:
      - /var/docker/ddclient:/config
    restart: unless-stopped

  # Proxy Manager - https://smarthomepursuits.com/how-to-install-nginx-proxy-manager-in-docker/
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "…" # Change mysql user
      DB_MYSQL_PASSWORD: "…" # Change mysql password
      DB_MYSQL_NAME: "npm"
    volumes:
      - /srv/config/nginxproxymanager:/data
      - /srv/config/nginxproxymanager/letsencrypt:/etc/letsencrypt

  db:
    image: 'jc21/mariadb-aria:10.4'
    environment:
      MYSQL_ROOT_PASSWORD: '…' # Change mysql password
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: '…' # Change mysql user
      MYSQL_PASSWORD: '…' # Change mysql password
    volumes:
      - /srv/config/nginxproxymanager/db:/var/lib/mysql

  # Watchtower - Automatic Update of Containers/Apps
  watchtower:
    container_name: watchtower
    hostname: watchtower
    restart: always
    image: v2tec/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --schedule "5 8 1,15 * *" --cleanup

networks:
  svc1_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16
          gateway: 172.22.0.1

Thanks for any help/comments.

Are these containers being set up as individual stacks, or is there just one stack that deploys all the containers?

Edit: If this is being done using a single stack or docker-compose file, can you post the entirety of that file somewhere online (redacting personal info, of course), on a site like Pastebin, so we can see the whole thing as it is on your system, rather than broken up into unformatted sections as in your last forum post? Thanks!

I’ve encountered the same issue of having to manually restart containers connected to my VPN container, since, as was mentioned, the connected containers are trying to route through a VPN container that no longer exists.

Copy that. So I started out with this crazy docker compose file I found online and edited: https://pastebin.com/vReyFg8R

Later, I had some containers disappear, I suppose because of watchtower, so I recreated them in Portainer, and they show up as their own stacks right now. Those are Jackett, Lidarr, Radarr, and Sonarr, all four of which had their containers disappear on me. Additionally, I deployed ddclient and speedtest using Portainer, but I was going to bring them into the override YAML as well; speedtest tests the VPN speeds, so it makes sense to move it there at least.

I’m trying to update it to this one now: https://pastebin.com/yRpy5Wji

My goal is to have everything together, pointed at the VPN container, give the VPN container a static IP, and then pray that watchtower doesn’t cause the orphan situation that requires me to reconfigure every container that routes through the VPN when an update is redeployed.

I’ve been doing Docker for like two weeks, so I am an extreme newb, I know. Thanks for putting up with me as I learn from zero here.

Ugh, I just realized you can’t see the Pastebin. I get a silly error that it can only be published as private because of its offensive nature. Well, that makes for a hell of a time.

Any suggestions on an acceptable way to send your way?

Working off my crappy post above, I’ve gotten a lot farther down the line. It turned out I needed network_mode: "container:transmission-vpn" written with straight double quotes (") rather than the curly single quotes (‘ ’) that had been pasted in.
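
For anyone else who hits it, the difference is literally just the quote characters; curly quotes (the kind word processors and some forum pastes produce) get parsed as part of the value instead of as YAML quoting:

network_mode: ‘container:transmission-vpn’   # curly quotes become part of the value: broken
network_mode: "container:transmission-vpn"   # straight quotes: works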

Now I’m stuck on a network related error.

ERROR: The Compose file … is not valid because:
services.transmission-vpn.ports contains an invalid type, it should be a number, or an object
services.transmission-vpn.ipv4_address contains an invalid type, it should be an object, or null

However, all the ports are written just as they were in the original working compose file: "port:port".

The ipv4 stuff is new; I found it online and pasted it all in, with the goal of keeping a static IP so that my nginx proxy manager entry for overseerr doesn’t need to be updated each time the VPN container is updated and receives a new DHCP address from Docker.
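
If I’m reading the docs right, the shape compose expects nests the address under a named network on the service, with the gateway on the same config item as the subnet, and the network name on the service has to match the top-level definition (I notice I have service1_net in one spot and svc1_net in the other, so I’m fixing that too). Roughly:

services:
  transmission-vpn:
    networks:
      svc1_net:
        ipv4_address: 172.22.0.100
    ports:
      - "9091:9091"   # strings, straight quotes

networks:
  svc1_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16
          gateway: 172.22.0.1   # note: some older v3 compose schemas reject gateway here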

Alright, I just rebuilt everything. The original docker-compose couldn’t contain the hostnames (they conflict with the container network mode), and it couldn’t set the container network either, since those containers didn’t exist yet on first deploy. So I did an override YML with the network_mode: container entries, and now we are totally up and running. Amazing stuff. Beer at 11am? Sure.
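
For reference, the override boiled down to one entry per dependent service, something like this (service names as in my file above; everything else stays in the main compose file):

# docker-compose.override.yml
version: "3.6"
services:
  sonarr:
    network_mode: "container:transmission-vpn"
  radarr:
    network_mode: "container:transmission-vpn"
  lidarr:
    network_mode: "container:transmission-vpn"
  jackett:
    network_mode: "container:transmission-vpn"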