Nginx transparent proxy

Use nginx as a transparent proxy to terminate SSL traffic either locally or remotely

July 15, 2020
sysadmin nginx proxy load-balancer self hosting

So you want to self-host, part 2?

So you started hosting stuff at home: either you followed part 1 and used a WireGuard VPN to get a public IP for your server at home, or you came here because you did not find much documentation on nginx's proxy_bind $remote_addr transparent mode and want an example (it is called IP transparency in the docs).

In this article, I'll quickly outline how to use nginx as a transparent proxy that passes the remote IP address to an upstream server, using the stream module (not the http module and its X-Forwarded-For header).

This lets you load balance whatever protocol you want while preserving the client IP for your logs or other purposes.

As an added bonus, and as an example, I'll show you how to terminate an HTTPS connection either locally on the LB or on the upstream server.

Prerequisites:

  • 1 server ‘load-balancer’ (LB): 10.100.100.1
  • 1 server ‘upstream’ (UP): 10.100.100.20

Global steps:

  • Install/configure nginx on LB
  • Configure iptables and ip rules on LB
  • Install/configure nginx on UP (<— or whatever you want to serve behind the nginx proxy…)
  • Make sure LB is the default route on UP (already the case if you followed the WireGuard tutorial; see the quick check after this list).
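
A quick way to check that last point, assuming your WireGuard interface on UP is called wg0 (adjust to your setup):

# on UP: confirm outbound traffic leaves through the tunnel, not your home ISP
ip route get 1.1.1.1
# expect the route to go out via the wg interface (e.g. "dev wg0 src 10.100.100.20");
# if it goes via your home gateway, return packets will bypass the LB and the transparent proxy will hang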

On the cheap VPS LB:

ip_routes.sh (you can just add these lines to the ‘up’ section of the tunnel.sh from the WireGuard article):

#!/bin/bash

# traffic coming back from UP should be routed back to nginx for relay
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -I PREROUTING 1 -p tcp -s 10.100.100.20 --sport 443 -j MARK --set-xmark 0x1/0xffffffff
# traffic coming back from the local backend should also be routed back to nginx for relay
# do not bind the local backend to 127.0.0.1, it won't work; use your local VPN IP, it is easier (or any other local IP)
iptables -t mangle -I PREROUTING 1 -p tcp -s 10.100.100.1 --sport 4430 -j MARK --set-xmark 0x1/0xffffffff
iptables -t mangle -I OUTPUT 1 -p tcp -s 10.100.100.1 --sport 4430 -j MARK --set-xmark 0x1/0xffffffff
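
A quick sanity check after running the script (exact wording and counters may differ on your box):

# verify the policy routing and the fwmark rules are in place
ip rule show | grep fwmark               # should list: fwmark 0x1 lookup 100
ip route show table 100                  # should list: local default dev lo scope host
iptables -t mangle -L PREROUTING -n -v --line-numbers
iptables -t mangle -L OUTPUT -n -v --line-numbers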

run:

mkdir -p /etc/nginx/stream.d && echo 'include /etc/nginx/stream.d/*;' >> /etc/nginx/nginx.conf
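
Before going further, it is worth checking that your nginx build actually ships the stream and ssl_preread modules (on Debian/Ubuntu they may come from a separate package such as libnginx-mod-stream):

# list the stream-related modules compiled into nginx
nginx -V 2>&1 | grep -o 'with-stream[^ ]*'
# expect at least: with-stream (possibly =dynamic) and with-stream_ssl_preread_module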

/etc/nginx/stream.d/default:

stream {
    # route based on the SNI name sent in the TLS ClientHello (no TLS termination here)
    map $ssl_preread_server_name $upstream {
        terminate-on-LB.mydomain.com local;
        default default;
    }

    # https vhost hosted on the LB itself (see the site definition below)
    upstream local {
        server 10.100.100.1:4430;
    }

    # everything else is relayed as-is to the home server
    upstream default {
        server 10.100.100.20:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
        # connect to the upstream using the client's source address (needs the ip rules above)
        proxy_bind $remote_addr transparent;
    }
}
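
One thing that is easy to miss: proxy_bind … transparent needs privileges for the IP_TRANSPARENT bind; per the nginx docs, on Linux since 1.13.8 the workers inherit CAP_NET_RAW from the master process, so the usual root-owned master is enough. A couple of sanity checks after reloading, as a sketch:

# reload and check the stream block is actually listening on 443
nginx -t && systemctl reload nginx
ss -lntp | grep ':443 '
# the master process should run as root so the workers can do the transparent bind
ps -o user,comm -C nginx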

/etc/nginx/sites-enabled/terminate-on-LB.mydomain.com:

server {
    server_name terminate-on-LB.mydomain.com;

    location / {
      ...
    }

    listen 4430 ssl http2;
    ssl_certificate ....
    ssl_certificate_key ....

}
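
To check the SNI-based routing from any outside client, look at which certificate answers for each name (other.mydomain.com is just a placeholder for a name served by UP):

# this name should be terminated on the LB (the port 4430 vhost above)
openssl s_client -connect terminate-on-LB.mydomain.com:443 \
  -servername terminate-on-LB.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# any other name should be relayed untouched to UP, which terminates TLS itself
openssl s_client -connect other.mydomain.com:443 \
  -servername other.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject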

On the UP server:

Just configure nginx to serve something on port 443 for this example; the LB relays the TLS stream untouched, so UP terminates SSL itself for those domains.
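
If you just want a placeholder to test with, something like this is enough (home.mydomain.com and the certificate paths are assumptions to adapt, and a Debian-style sites-enabled layout is assumed):

# minimal test vhost on UP; TLS is terminated here since the LB only relays the bytes
cat > /etc/nginx/sites-enabled/home.mydomain.com <<'EOF'
server {
    listen 443 ssl;
    server_name home.mydomain.com;
    ssl_certificate     /etc/ssl/certs/home.mydomain.com.pem;
    ssl_certificate_key /etc/ssl/private/home.mydomain.com.key;
    root /var/www/html;
}
EOF
nginx -t && systemctl reload nginx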

Last steps:

  • Launching ip_routes.sh on the LB is left for you to do as you see fit.
  • Make sure LB is the default route for UP, then run the quick end-to-end check below.
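
Once everything is up, you can verify from UP that the real client IP makes it through (log path assumes a default nginx install):

# on UP: watch the access log while hitting a UP-hosted name from outside
tail -f /var/log/nginx/access.log
# the first field should be the visitor's public IP, not 10.100.100.1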

Conclusion:

Easy to set up, and it allows you to have your status page on your VPS while the rest is hosted at home, so you can be warned in case of trouble :) All domains not explicitly listed in the map $ssl_preread_server_name $upstream on the LB will be sent to the nginx server on UP. And since the client IP is preserved, fail2ban or whatever else you run will keep working properly.