Hosting services behind a restrictive firewall/CGNAT using DNAT on a VPS-hosted WireGuard endpoint
This is a short description of how to host services on a host behind CG-NAT or an otherwise restrictive firewall, using a Storj node as an example, by forwarding packets through a WireGuard endpoint on a relatively fast nearby VPS. This is not specific to Storj and can be adapted to hosting other services.
Here we will use an Oracle Cloud instance to host the VPN server; the free tier still provides 10 TB of monthly traffic, which is sufficient for most node operators. Just make sure to create the account in the datacenter closest to you to minimize extra latency.
This was originally posted as a GitHub gist, and the firewall-cmd configuration as a follow-up forum post.
- Notes on configuring the cloud instance
- Installing and configuring wireguard tunnel
- Packet forwarding
- Final steps
- History
Notes on configuring the cloud instance
- Create a new Oracle compute instance (ideally Ampere, because they are awesome, but if that is not available, any other shape will do too).
- Pick any OS you prefer; here we’ll describe Ubuntu as the most popular one.
- Configure a public IP address (this is the default), and upload an SSH key to access the instance.
- Then edit the Ingress Rules in the Default Security List in the VCN associated with the instance and add rules to allow:
  - Traffic from anywhere 0.0.0.0/0, any source port, to destination port 28967, one rule for TCP and one for UDP. This is for Storj.
  - UDP to port 51820, for WireGuard. It does not need to be this specific port, any will do, but adjust the rest accordingly. The source network can also be narrowed down to your ISP’s address range, if desired.

  | Stateless | Source | IP Protocol | Source Port Range | Destination Port Range | Type and Code | Allows | Description |
  | --- | --- | --- | --- | --- | --- | --- | --- |
  | No | 0.0.0.0/0 | TCP | All | 28967 | | TCP traffic for port 28967 | Storj TCP |
  | No | 0.0.0.0/0 | UDP | All | 28967 | | UDP traffic for port 28967 | Storj UDP |
  | No | 0.0.0.0/0 | UDP | All | 51820 | | UDP traffic for port 51820 | Wireguard |

  That’s all that needs to be done in the Oracle console.
- Optionally, configure an A record pointing to the public IP with your DNS provider, so you can use a DNS name rather than an ugly IP address in the subsequent configuration and as your Storj node’s external address.
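If you do set up the A record, a quick way to confirm it resolves to the instance’s public address (sub.example.com is the placeholder name used throughout this guide):

# should print the public IP of your Oracle instance
dig +short sub.example.com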
Installing and configuring wireguard tunnel
- ssh to your new instance, update software, and install wireguard:

  sudo apt update && sudo apt upgrade
  sudo reboot
  sudo apt install wireguard -y

- Configure the wireguard tunnel between your node and the VPS. There are tons of tutorials; here are the steps for reference:
On the VPS:
- Initialize the config file:

  (umask 077 && printf "[Interface]\nPrivateKey= " | sudo tee /etc/wireguard/wg0.conf > /dev/null)
  wg genkey | sudo tee -a /etc/wireguard/wg0.conf | wg pubkey | sudo tee /etc/wireguard/publickey

- Add peer information (public key and address) after configuring it below.
- Enable IPv4 forwarding: in /etc/sysctl.conf uncomment

  # Uncomment the next line to enable packet forwarding for IPv4
  net.ipv4.ip_forward=1

  and for the change to take effect, load it:

  sudo sysctl -p

  Note: it is possible to configure this key in a number of other configuration files, see man sysctl, but in that case either provide the path to the file to the -p argument or simply use sudo sysctl --system, which will parse all configuration files.

- Enable and start the wireguard service:

  sudo systemctl enable wg-quick@wg0
  sudo systemctl start wg-quick@wg0
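To sanity-check the VPS side before moving on to the client, something like the following can be run (the publickey path matches the file created above; the exact wg show output will vary):

# Print the server's public key; it goes into the client's [Peer] section
sudo cat /etc/wireguard/publickey

# Confirm forwarding is active; should print: net.ipv4.ip_forward = 1
sudo sysctl net.ipv4.ip_forward

# Confirm the wg0 interface is up and listening on 51820/udp
sudo wg show wg0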
On the client, assuming it is TrueNAS and Storj runs in a jail, we need a few things:
- In the jail properties tick the allow_tun flag (e.g. iocage set allow_tun=1 jailname).
- On the host, under System → Tunables, add a LOADER variable if_wg_load with the value YES, to load the wireguard kernel module.
- Initialize the wireguard config file and create keys just like above, noting that on FreeBSD the default configuration file location is /usr/local/etc/wireguard/wg0.conf.
- In the jail, in /etc/rc.conf add (a consolidated command sketch follows this list):

  wireguard_enable="YES"
  wireguard_interfaces="wg0"
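For reference, here are the command-line equivalents of the jail-related steps above, assuming the jail is named storj (the name is a placeholder) and using sysrc to edit rc.conf in place:

# On the TrueNAS host: allow the jail to create tun devices,
# and confirm the wireguard kernel module is loaded after reboot
iocage set allow_tun=1 storj
kldstat | grep if_wg

# Inside the jail: the sysrc equivalent of the rc.conf edits above
sysrc wireguard_enable="YES"
sysrc wireguard_interfaces="wg0"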
Generally, the config files shall look like so:
- On the server: /etc/wireguard/wg0.conf

  [Interface]
  PrivateKey = <server private key>
  ListenPort = 51820
  Address = 10.0.60.1
  # Allow WireGuard's own traffic to reach the server.
  PreUp = iptables -I INPUT -p udp --dport 51820 -j ACCEPT
  PostDown = iptables -D INPUT -p udp --dport 51820 -j ACCEPT

  [Peer]
  PublicKey = <client public key>
  AllowedIPs = 10.0.60.2/32

- On the client: /usr/local/etc/wireguard/wg0.conf

  [Interface]
  PrivateKey = <client private key>
  Address = 10.0.60.2

  [Peer]
  PublicKey = <server public key>
  AllowedIPs = 10.0.60.1/32
  Endpoint = sub.example.com:51820
  PersistentKeepalive = 25

- Start the service on the client:
service wireguard start
At this point the client shall be able to ping the server, and the server shall be able to ping the client, at 10.0.60.1 and 10.0.60.2, respectively.
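A quick way to verify the tunnel (the addresses are the ones configured above; exact output will vary):

# From the client (node), ping the server end of the tunnel:
ping -c 3 10.0.60.1

# From the VPS, ping the client end:
ping -c 3 10.0.60.2

# On either end (as root), confirm a recent handshake and non-zero transfer counters:
wg show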
Packet forwarding
using iptables
Now the very last thing, the meat of this tutorial. In the [Interface] section of /etc/wireguard/wg0.conf on the server, add the following PreUp and PostDown rules (each PostDown rule is a copy of the corresponding PreUp rule, with the -A or -I option replaced with -D, to delete the rule):
# Allow WireGuard's own traffic to reach the server.
PreUp = iptables -I INPUT -p udp --dport 51820 -j ACCEPT
PostDown = iptables -D INPUT -p udp --dport 51820 -j ACCEPT
# Allow incoming Storj connections on the public interface BEFORE they are forwarded.
PreUp = iptables -I INPUT -p tcp --dport 28967 -j ACCEPT
PostDown = iptables -D INPUT -p tcp --dport 28967 -j ACCEPT
PreUp = iptables -I INPUT -p udp --dport 28967 -j ACCEPT
PostDown = iptables -D INPUT -p udp --dport 28967 -j ACCEPT
# Port forward incoming Storj traffic to the VPN client.
PreUp = iptables -t nat -I PREROUTING -i ens3 -p tcp --dport 28967 -j DNAT --to-destination 10.0.60.2:28967
PostDown = iptables -t nat -D PREROUTING -i ens3 -p tcp --dport 28967 -j DNAT --to-destination 10.0.60.2:28967
PreUp = iptables -t nat -I PREROUTING -i ens3 -p udp --dport 28967 -j DNAT --to-destination 10.0.60.2:28967
PostDown = iptables -t nat -D PREROUTING -i ens3 -p udp --dport 28967 -j DNAT --to-destination 10.0.60.2:28967
# Allow the now-forwarded traffic to pass from the public interface to the VPN interface.
PreUp = iptables -I FORWARD -i ens3 -o %i -m state --state RELATED,ESTABLISHED -j ACCEPT
PreUp = iptables -I FORWARD -i ens3 -o %i -p tcp -d 10.0.60.2 --dport 28967 -j ACCEPT
PreUp = iptables -I FORWARD -i ens3 -o %i -p udp -d 10.0.60.2 --dport 28967 -j ACCEPT
PostDown = iptables -D FORWARD -i ens3 -o %i -m state --state RELATED,ESTABLISHED -j ACCEPT
PostDown = iptables -D FORWARD -i ens3 -o %i -p tcp -d 10.0.60.2 --dport 28967 -j ACCEPT
PostDown = iptables -D FORWARD -i ens3 -o %i -p udp -d 10.0.60.2 --dport 28967 -j ACCEPT
# Allow outbound traffic from the VPN client out to the internet
PreUp = iptables -I FORWARD -i %i -o ens3 -j ACCEPT
PostDown = iptables -D FORWARD -i %i -o ens3 -j ACCEPT
# Perform NAT for traffic from the VPN client going to the internet
PreUp = iptables -t nat -I POSTROUTING -s 10.0.60.2/32 -o ens3 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.0.60.2/32 -o ens3 -j MASQUERADE
These achieve a few things:
- Allow traffic to the WireGuard port, so that your node can connect and establish the tunnel.
- Allow new TCP and UDP connections to the Storj port (we are inserting the rules at the very top of the chain; it is enough to make sure they land before rule 6, which on Oracle instances is a reject; hence -I, not -A).
- Port forward incoming Storj traffic to the VPN client.
- Allow the now-forwarded traffic to pass from the public interface to the VPN interface
- Allow outbound traffic from the VPN client out to the internet.
- Perform NAT for traffic from the VPN client going to the internet
On the server, restart the wireguard service:
sudo systemctl restart wg-quick@wg0
On the client, restart the wireguard service:
service wireguard restart
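Once both ends are back up, the rules that wg-quick installed can be inspected on the server; the packet counters in the verbose output help confirm that traffic is actually matching the intended rules:

sudo iptables -L INPUT -n -v
sudo iptables -L FORWARD -n -v
sudo iptables -t nat -L PREROUTING -n -v
sudo iptables -t nat -L POSTROUTING -n -v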
using firewall-cmd
Instead of manually manipulating iptables, everything can be accomplished at a higher level of abstraction, and, arguably, more simply, using firewall-cmd.
In the wireguard config we don’t need any PreUp/PostDown commands:
On the server:
# cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = xxx
ListenPort = 51820
Address = 10.143.221.13/32
[Peer]
PublicKey = yyy
AllowedIPs = 10.143.221.14/32
On the client:
# cat /usr/local/etc/wireguard/oracle.conf
[Interface]
PrivateKey = zzz
Address = 10.143.221.14/32
[Peer]
PublicKey = nnn
AllowedIPs = 0.0.0.0/0
Endpoint = oracle_instance_public_ip:51820
PersistentKeepalive = 25
Note that we are setting AllowedIPs to 0.0.0.0/0. This ensures that all traffic originating from the node is routed through the VPN, in particular inadyn traffic that would update DDNS if you were using one. If you do not want this behavior, list only the server’s address in AllowedIPs; in that case, skip enabling masquerading in the steps below.
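After completing the firewall steps below, one way to confirm which route the node’s outbound traffic takes is to query an external what-is-my-IP service (ifconfig.me is just one example; any will do). With AllowedIPs = 0.0.0.0/0 and masquerading enabled, the node’s apparent public IP should be the VPS address:

# Run on the node; with full tunneling this prints the VPS public IP,
# otherwise it prints the address assigned by your ISP/CG-NAT
curl -4 -s https://ifconfig.me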
Edit the configuration at the top, and run the following script.
WG_CLIENT_IP=10.143.221.14
WG_CLIENT_PORT_EXT=28967
WG_CLIENT_PORT_INT=28967
WG_IFACE="wg0"
# On Oracle instances the default external zone is called "public", not "external"
EXT_ZONE="public"
# Allow wireguard connections
firewall-cmd --zone="$EXT_ZONE" --add-port=51820/udp
# Allow node connections:
firewall-cmd --zone="$EXT_ZONE" --add-port=$WG_CLIENT_PORT_EXT/udp
firewall-cmd --zone="$EXT_ZONE" --add-port=$WG_CLIENT_PORT_EXT/tcp
# Enable masquerading on the "$EXT_ZONE" zone oracle has configured. Predefined "external"
# zone would have had that enabled. This is not needed if you don't want to route all traffic
# that originates on the node through the tunnel, see the remark above.
firewall-cmd --zone="$EXT_ZONE" --add-masquerade
# Add forwarding rules for your nodes' port
firewall-cmd --zone="$EXT_ZONE" --add-forward-port="port=$WG_CLIENT_PORT_EXT:proto=tcp:toport=$WG_CLIENT_PORT_INT:toaddr=$WG_CLIENT_IP"
firewall-cmd --zone="$EXT_ZONE" --add-forward-port="port=$WG_CLIENT_PORT_EXT:proto=udp:toport=$WG_CLIENT_PORT_INT:toaddr=$WG_CLIENT_IP"
# add wireguard interface to internal zone
firewall-cmd --zone=internal --add-interface="$WG_IFACE"
# Review:
firewall-cmd --get-active-zones
firewall-cmd --info-zone="$EXT_ZONE"
The last few commands should output something like this:
# firewall-cmd --get-active-zones
internal
interfaces: wg0
public
interfaces: enp0s3
# firewall-cmd --info-zone=public
public (active)
target: default
icmp-block-inversion: no
interfaces: enp0s3
sources:
services: dhcpv6-client ssh
ports: 51820/udp 28967/udp 28967/tcp
protocols:
forward: no
masquerade: yes
forward-ports:
port=28967:proto=tcp:toport=28967:toaddr=10.143.221.14
port=28967:proto=udp:toport=28967:toaddr=10.143.221.14
source-ports:
icmp-blocks:
rich rules:
Once everything is verified, make the firewall configuration persistent by executing
firewall-cmd --runtime-to-permanent
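If desired, the permanent configuration can then be compared with the runtime one (reusing the $EXT_ZONE variable from the script above):

firewall-cmd --permanent --info-zone="$EXT_ZONE"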
Note for AlmaLinux and similar
I’ve encountered a handful of installations where this does not work: firewall-cmd refuses to create rules that would allow forwarding from the internal interface to the public.
Despite forward: yes being set on the internal zone, firewalld still generates a restrictive rule that only allows traffic entering wg0 to go back out wg0 (see nft list ruleset).
Specifically:
- the chain filter_FORWARD jumps to zone-specific forwarding rules: jump filter_FORWARD_ZONES
- the chain filter_FORWARD_ZONES has a rule for traffic from wg0: iifname "wg0" goto filter_FWD_internal
- the chain filter_FWD_internal jumps to filter_FWD_internal_allow and then to a default reject if nothing matches.
- the chain filter_FWD_internal_allow looks like so:

  chain filter_FWD_internal_allow {
      oifname "wg0" accept
  }

  It’s essentially saying “if traffic enters wg0 for forwarding, it can only leave via wg0.” This directly contradicts the goal of a VPN server.
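To look at just the offending chain rather than the whole ruleset (assuming the default firewalld nftables backend, which keeps its rules in the inet firewalld table):

sudo nft list chain inet firewalld filter_FWD_internal_allow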
I was unable to combat this; even creating an entirely new zone did not help, as that pesky rule gets inserted every time.
The workaround is to add both interfaces into the same zone. There are some security implications to consider—but for our purposes they are not relevant.
firewall-cmd --zone=internal --remove-interface=wg0 --permanent
firewall-cmd --zone=public --add-interface=wg0 --permanent
firewall-cmd --reload
Then the zones look like so, and the VPN works:
# firewall-cmd --get-active-zones
public
interfaces: wg0 eth0
# firewall-cmd --info-zone=public
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 wg0
sources:
services: dhcpv6-client ssh
ports: 51820/udp
protocols:
forward: yes
masquerade: yes
forward-ports:
port=28967:proto=tcp:toport=28967:toaddr=10.143.221.14
port=28967:proto=udp:toport=28967:toaddr=10.143.221.14
source-ports:
icmp-blocks:
rich rules:
Final steps
In the config.yaml of the storage node, modify the external address to point to your VPS:
# the public address of the node, useful for nodes behind NAT
contact.external-address: sub.example.com:28967
[Re]start the node and check the status page. It shall now be happily connected.
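As a final end-to-end check, from a machine outside your network (netcat is assumed to be available; any port checker will do), confirm the forwarded TCP port is reachable through the VPS:

nc -vz sub.example.com 28967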
History
| Dec 15, 2022 | initial publication |
| Nov 09, 2024 | described alternative ip configuration using firewall-cmd |
| Jun 04, 2025 | added workaround for stubborn firewall-cmd on AlmaLinux |
| Oct 01, 2025 | Rewritten iptables rules to ensure the originator’s IP address is preserved as seen by the client |