OVH, failover IPs, IPv6, VMs

at work we rent a dedicated server from OVH; apart from some unexplained openvpn throttling everything works pretty well for the price we pay. besides the primary IPv4 address OVH can provide a few additional ‘failover’ IPv4 addresses and a /64 IPv6 subnet. in our setup some of the IPv4s and IPv6s are routed to a KVM VM. below – a description of the configuration details.

IPv4 setup

it seems that the additional [so-called failover] IPv4 addresses are re-routed by the datacenter via our primary IPv4 address. because i have some private traffic flowing between the KVM guest, the host server and a few LXC containers, i did not want to bridge any of the guests directly to the datacenter-facing eth0 interface. instead i’ve created two ‘host only’ networks:

  • br0 – connecting LXC machines with the dummy0 virtual interface on the physical server
  • br1 – connecting KVM VM with the dummy1 virtual interface on the physical server

the logical setup looks as follows:
[diagram: ovh-bridges-ipv4]

addresses:

  • 192.95.0.10/24 is the primary IPv4 assigned by OVH; it uses 192.95.0.254 as its default gateway
  • 198.50.128.15/32 and 142.4.206.19/32 are additional IPv4 addresses provided by OVH. the datacenter routes them via 192.95.0.10, which means i could bind them e.g. to the loopback interface of the physical server, or – as in my case – route them via an internal network to the KVM VM
  • on br0:
    • 192.168.0.1/24 is bound to br0 – the bridge built on top of the dummy0 virtual interface on the physical machine
    • 192.168.0.2/24, 192.168.0.3/24 are bound to the LXC guest machines. those machines can communicate with each other and reach internet resources via routing and SNAT handled by the physical server
  • on br1:
    • 10.0.0.1/24 is bound to br1 – the bridge built on top of the dummy1 virtual interface on the physical machine
    • 10.0.0.2/24 is bound to the eth0 of the KVM guest. to the same interface i’ve bound 198.50.128.15 and 142.4.206.19; to make that guest reachable from the internet i’ve also set up static routing and forwarding on the physical machine, stating that both addresses are reachable via 10.0.0.2 on br1

configuration of the physical machine – in /etc/network/interfaces:

auto eth0
iface eth0 inet static
        address 192.95.0.10/24
        gateway 192.95.0.254

# dummy interface used to build bridge between LXC guests and the physical machine
auto dummy0
iface dummy0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.1/24
        bridge_stp off
        bridge_fd 0
        bridge_ports dummy0

# dummy interface used to build bridge between KVM guest and the physical machine
auto dummy1
iface dummy1 inet manual

auto br1
iface br1 inet static
        address 10.0.0.1/24
        bridge_stp off
        bridge_fd 0
        bridge_ports dummy1
        # route the 'failover' IPv4 addresses further on to the KVM guest
        post-up /sbin/ip r a 142.4.206.19/32 via 10.0.0.2 dev br1
        post-up /sbin/ip r a 198.50.128.15/32 via 10.0.0.2 dev br1
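
a small caveat: ‘iface dummy0 inet manual’ does not create the interface by itself – dummy0 and dummy1 only exist once the dummy kernel module provides them. one way to handle that [a sketch assuming Debian-style ifupdown and iproute2 – adjust to your distribution]:

# make the dummy module create two interfaces at boot
echo dummy >> /etc/modules
echo "options dummy numdummies=2" > /etc/modprobe.d/dummy.conf

# alternative: create each interface explicitly from /etc/network/interfaces, e.g.
# iface dummy1 inet manual
#         pre-up /sbin/ip link add dummy1 type dummy || true
#         post-down /sbin/ip link del dummy1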

additionally, IPv4 forwarding is enabled in the startup script /etc/rc.local:

echo 1 > /proc/sys/net/ipv4/ip_forward
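
the same can be made persistent with sysctl instead [assuming a Debian-style /etc/sysctl.conf]:

# /etc/sysctl.conf
net.ipv4.ip_forward = 1

# apply without rebooting
sysctl -p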

since i want all outgoing internet traffic from the KVM guest to have the source ip of 142.4.206.19, i’ve added nat rules on the KVM guest in /etc/rc.local:

iptables -t nat -A POSTROUTING -o eth0 -d 10.0.0.0/8 -j RETURN
iptables -t nat -A POSTROUTING -o eth0 -d 192.168.0.0/16 -j RETURN
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 142.4.206.19
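
for completeness – the guest side of the v4 setup could look more or less like this in its /etc/network/interfaces [a sketch based on the description above, not a verbatim copy of the production config]:

auto eth0
iface eth0 inet static
        address 10.0.0.2/24
        gateway 10.0.0.1
        # bind the failover addresses routed here by the physical server
        post-up /sbin/ip addr add 142.4.206.19/32 dev eth0
        post-up /sbin/ip addr add 198.50.128.15/32 dev eth0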

at this stage i can already reach the physical server [ 192.95.0.10 ] and the KVM VM [ 142.4.206.19, 198.50.128.15 ] from the internet.

IPv6 setup

the IPv6 configuration is less obvious – OVH does not route an additional subnet via the primary address assigned to us; they just provide a /64 network reachable directly on the datacenter-facing eth0. my options for providing v6 connectivity to the KVM VM were:

  • bridge the VM directly to the datacenter-facing network – initially i wanted to avoid doing that; i wanted to keep the private part of my internal traffic – well – private.
  • ask OVH for an additional v6 subnet routed via our server – unsurprisingly that did not work, they only provide the single /64
  • use the NDP proxy mechanism – the IPv6 equivalent of proxy ARP

the last option worked pretty well. i did not have to set up any additional interfaces; i’ve just added more addressing and configuration rules. new setup:

[diagram: ovh-bridges-v6]

new addresses:

  • 2607:5300:12:2f0a::/64 is the IPv6 network designated for my use by OVH
  • 2607:5300:12:2fff:ff:ff:ff:ff is the default gateway for my subnet, provided by OVH
  • 2607:5300:12:2f0a::1 is an arbitrarily chosen IPv6 address out of ‘my’ v6 subnet. i intentionally bound it to eth0 of the physical server with a /126 mask [rather than /64]
  • 2607:5300:12:2f0a::11/124 is the v6 address i’ve assigned to br1 – the bridge connecting the KVM guest and the physical server
  • 2607:5300:12:2f0a::12/124 is the v6 address assigned to the KVM guest

it’s all fine and dandy, but we have to be able to tell the datacenter’s router that 2607:5300:12:2f0a::12 can be reached; just setting up the above addresses is not enough. the NDP proxy mechanism mentioned above solves the problem: the physical server answers neighbour solicitations for the guest’s addresses on eth0 and forwards the traffic over br1. additional lines in the /etc/network/interfaces of the physical machine:

iface eth0 inet6 static
        address 2607:5300:12:2f0a::1/126
        # the OVH gateway lies outside the prefix bound to eth0, so the 'normal' gateway statement
        # does not work; the gateway has to be declared reachable on-link first
        post-up /sbin/ip -f inet6 route add 2607:5300:12:2fff:ff:ff:ff:ff dev eth0
        post-up /sbin/ip -f inet6 route add default via 2607:5300:12:2fff:ff:ff:ff:ff

iface br1 inet6 static
        address 2607:5300:12:2f0a::11/124
        # use NDP proxy [IPv6's counterpart of proxy ARP] - a trick that lets me make some of the IPv6 addresses
        # available to the KVM guest despite the fact that it's not directly bridged to the OVH-facing network
        post-up /sbin/ip -6 neigh add proxy 2607:5300:12:2f0a::12 dev eth0
        post-up /sbin/ip -6 neigh add proxy 2607:5300:12:2f0a::13 dev eth0

additional commands in the startup script:

echo 1 > /proc/sys/net/ipv6/conf/all/proxy_ndp
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
echo 1 > /proc/sys/net/ipv6/conf/default/forwarding

echo 0 > /proc/sys/net/ipv6/conf/all/autoconf
echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra
echo 0 > /proc/sys/net/ipv6/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv6/conf/all/router_solicitations
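
a quick sanity check on the physical machine [just verification, nothing here changes the setup]:

# the proxy entries added in /etc/network/interfaces should be listed here
ip -6 neigh show proxy

# both should report 1
sysctl net.ipv6.conf.all.proxy_ndp net.ipv6.conf.all.forwarding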

the KVM guest just requires this in its /etc/network/interfaces:

iface eth0 inet6 static
        address 2607:5300:12:2f0a::12/124
        gateway 2607:5300:12:2f0a::11
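
once the guest is up, basic reachability can be verified from it – e.g. [the last address is just Google's public DNS, used here as an arbitrary external target]:

# the physical server's end of br1
ping6 -c 3 2607:5300:12:2f0a::11
# the OVH gateway
ping6 -c 3 2607:5300:12:2fff:ff:ff:ff:ff
# something outside the datacenter
ping6 -c 3 2001:4860:4860::8888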

what i described above is simplified. on top of that, in production i also have:

  • v4 and v6 packet filtering
  • SNAT / routing for traffic from LXC guests done on the physical server
  • openvpn tunnel to reach our other sites
  • gre tunnel to make openvpn work at the speed of tens of megabits rather than kilobits

edit: 2014-08-16 – i’ve enabled ipv6 on our internet-facing DNS servers [and advertised v6 addresses in the glue records], web server, mail server, client-facing proxies and… nothing broke. we’ve started to see some v6 traffic and so far – no complaints. nice!
