{"id":2240,"date":"2014-06-15T18:02:38","date_gmt":"2014-06-15T17:02:38","guid":{"rendered":"http:\/\/kudzia.eu\/b\/?p=2240"},"modified":"2014-08-16T19:27:07","modified_gmt":"2014-08-16T18:27:07","slug":"ovh-failover-ips-vms","status":"publish","type":"post","link":"https:\/\/kudzia.eu\/b\/2014\/06\/ovh-failover-ips-vms\/","title":{"rendered":"OVH, failover IPs, IPv6, VMs"},"content":{"rendered":"<p>at work we rent a dedicated server from OVH; except <a href=\"\/b\/2014\/03\/openvpn-throttled-from-ovhs-bhs-datacenter\/\">unexplained openvpn throttling<\/a> all is working pretty well for the price we pay. besides primary IPv4 address OVH can provide few additional &#8216;failover&#8217; IPv4 addresses and \/64 IPv6 subnet. in our setup some of IPv4s and IPv6s are routed to a KVM VM. below &#8211; description of the configuration details.<br \/>\n<!--more--><\/p>\n<h3>IPv4 setup<\/h3>\n<p>it seems that the additional [called failover] IPv4 addresses are re-routed by the datacenter via our primary IPv4 address. because i have some private traffic flowing between KVM guest, the host server and few LXC containers i did not want to bridge any of the guests directly to the data-center facing eth0 interface. 
instead i&#8217;ve created two &#8216;host only&#8217; networks: <\/p>\n<ul>\n<li>br0 &#8211; connecting the LXC machines with the dummy0 virtual interface on the physical server<\/li>\n<li>br1 &#8211; connecting the KVM VM with the dummy1 virtual interface on the physical server<\/li>\n<\/ul>\n<p>the logical setup looks as follows:<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-ipv4.png\" alt=\"ovh-bridges-ipv4\" width=\"608\" height=\"511\" class=\"alignnone size-full wp-image-2252\" srcset=\"https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-ipv4.png 608w, https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-ipv4-300x252.png 300w\" sizes=\"auto, (max-width: 608px) 100vw, 608px\" \/><\/p>\n<p>addresses:<\/p>\n<ul>\n<li>192.95.0.10\/24 is the primary IPv4 assigned by OVH, using 192.95.0.254 as the default gateway<\/li>\n<li>198.50.128.15\/32 and 142.4.206.19\/32 are additional IPv4 addresses provided by OVH. the datacenter routing expects to reach those addresses via 192.95.0.10, which means i could bind them e.g. to the loopback interface of the physical server or &#8211; as in my case &#8211; route them via an internal network to the KVM VM<\/li>\n<li>on br0:\n<ul>\n<li>192.168.0.1\/24 is bound to dummy0 &#8211; a virtual interface on the physical machine<\/li>\n<li>192.168.0.2\/24, 192.168.0.3\/24 are bound to the LXC guest machines. those machines can communicate with each other and reach internet resources via routing and SNAT handled by the physical server<\/li>\n<\/ul>\n<\/li>\n<li>on br1:\n<ul>\n<li>10.0.0.1\/30 is bound to dummy1 &#8211; another virtual interface on the physical machine<\/li>\n<li>10.0.0.2\/30 is bound to the eth0 of the KVM guest. 
to the same interface i&#8217;ve bound 198.50.128.15 and 142.4.206.19; to make that guest reachable from the internet i&#8217;ve also set up static routing and forwarding on the physical machine, telling it that both addresses are reachable via 10.0.0.2 on br1<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>configuration of the physical machine &#8211; in \/etc\/network\/interfaces:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\nauto eth0\r\niface eth0 inet static\r\n        address 192.95.0.10\/24\r\n        gateway 192.95.0.254\r\n\r\n# dummy interface used to build the bridge between the LXC guests and the physical machine\r\nauto dummy0\r\niface dummy0 inet manual\r\n\r\nauto br0\r\niface br0 inet static\r\n        address 192.168.0.1\/24\r\n        bridge_stp off\r\n        bridge_fd 0\r\n        bridge_ports dummy0\r\n\r\n# dummy interface used to build the bridge between the KVM guest and the physical machine\r\nauto dummy1\r\niface dummy1 inet manual\r\n\r\nauto br1\r\niface br1 inet static\r\n        address 10.0.0.1\/30\r\n        bridge_stp off\r\n        bridge_fd 0\r\n        bridge_ports dummy1\r\n        # re-route the 'failover' IPv4 addresses further, to the KVM guest\r\n        post-up \/sbin\/ip r a 142.4.206.19\/32 via 10.0.0.2 dev br1\r\n        post-up \/sbin\/ip r a 198.50.128.15\/32 via 10.0.0.2 dev br1\r\n<\/pre>\n<p>additionally, routing is enabled in the startup script \/etc\/rc.local:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\necho 1 &gt; \/proc\/sys\/net\/ipv4\/ip_forward\r\n<\/pre>\n<p>since i want all outgoing internet traffic from the KVM guest to have the source ip of 142.4.206.19 &#8211; i&#8217;ve added a nat rule on the KVM guest in \/etc\/rc.local:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\niptables -t nat -A POSTROUTING -o eth0 -d 10.0.0.0\/8 -j RETURN\r\niptables -t nat -A POSTROUTING -o eth0 -d 192.168.0.0\/16 -j RETURN\r\niptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 
142.4.206.19\r\n<\/pre>\n<p>at this stage i can already reach the physical server [ 192.95.0.10 ] and the KVM VM [ 142.4.206.19, 198.50.128.15 ] from the internet.<\/p>\n<h3>IPv6 setup<\/h3>\n<p>the IPv6 configuration is less obvious &#8211; OVH does not re-route a subnet via the primary address assigned to us. they just provide a \/64 network reachable directly via the datacenter-facing eth0. my options for providing v6 connectivity to the KVM VM:<\/p>\n<ul>\n<li>bridge the VM directly to the datacenter-facing network &#8211; initially i wanted to avoid doing it. i wanted to keep the private part of my internal traffic &#8211; well &#8211; private.<\/li>\n<li>ask OVH for an additional v6 subnet re-routed via our server &#8211; unsurprisingly that did not work &#8211; they only provide a \/64<\/li>\n<li>use the <a href=\"http:\/\/forum.ovh.co.uk\/showthread.php?5844-IPv6-with-Proxy-ARP\">arp-proxy mechanism<\/a> for IPv6 [proxy NDP, in v6 terms]<\/li>\n<\/ul>\n<p>the last option worked pretty well. i did not have to set up any additional interfaces; i&#8217;ve just added more addressing and configuration rules. the new setup:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-v6.png\" alt=\"ovh-bridges-v6\" width=\"615\" height=\"511\" class=\"alignnone size-full wp-image-2265\" srcset=\"https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-v6.png 615w, https:\/\/kudzia.eu\/b\/wp-content\/uploads\/2014\/06\/ovh-bridges-v6-300x249.png 300w\" sizes=\"auto, (max-width: 615px) 100vw, 615px\" \/><\/p>\n<p>new addresses:<\/p>\n<ul>\n<li>2607:5300:12:2f0a::\/64 is the IPv6 network designated for my use by OVH<\/li>\n<li>2607:5300:12:2fff:ff:ff:ff:ff is the default gateway for my subnet, provided by OVH &#8211; note that it sits outside &#8216;my&#8217; \/64<\/li>\n<li>2607:5300:12:2f0a::1 is an arbitrarily chosen IPv6 address out of &#8216;my&#8217; v6 subnet. 
i intentionally bound 2607:5300:12:2f0a::1\/126 [rather than \/64] to eth0 of the physical server<\/li>\n<li>2607:5300:12:2f0a::11\/124 is the v6 address i&#8217;ve assigned to the dummy1 interface &#8211; part of the br1 bridge between the KVM guest and the physical server<\/li>\n<li>2607:5300:12:2f0a::12\/124 is the v6 address assigned to the KVM guest<\/li>\n<\/ul>\n<p>it&#8217;s all fine and dandy but we have to be able to tell the datacenter&#8217;s router that 2607:5300:12:2f0a::12 can be reached; just setting up the above addresses is not enough. the arp-proxy mechanism mentioned above solves the problem. additional lines in the \/etc\/network\/interfaces of the physical machine:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\niface eth0 inet6 static\r\n        address 2607:5300:12:2f0a::1\/126\r\n        # the 'normal' way of specifying the default route did not work - the gateway lies outside the \/126 bound above,\r\n        # so it is not on-link; a host route to it has to be added before the default route\r\n        post-up \/sbin\/ip -f inet6 route add 2607:5300:12:2fff:ff:ff:ff:ff dev eth0\r\n        post-up \/sbin\/ip -f inet6 route add default via 2607:5300:12:2fff:ff:ff:ff:ff\r\n\r\niface br1 inet6 static\r\n        address 2607:5300:12:2f0a::11\/124\r\n        # use ndp proxy [the v6 counterpart of arp proxy] - a trick that lets me make some of the IPv6 addresses \r\n        # available to the KVM guest despite the fact that it's not directly bridged to the OVH-facing network\r\n        post-up \/sbin\/ip -6 neigh add proxy 2607:5300:12:2f0a::12 dev eth0\r\n        post-up \/sbin\/ip -6 neigh add proxy 2607:5300:12:2f0a::13 dev eth0\r\n<\/pre>\n<p>additional commands in the startup script:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\necho 1 &gt; \/proc\/sys\/net\/ipv6\/conf\/all\/proxy_ndp\r\necho 1 &gt; \/proc\/sys\/net\/ipv6\/conf\/all\/forwarding\r\necho 1 &gt; \/proc\/sys\/net\/ipv6\/conf\/default\/forwarding\r\n\r\necho 0 &gt; \/proc\/sys\/net\/ipv6\/conf\/all\/autoconf\r\necho 0 &gt; 
\/proc\/sys\/net\/ipv6\/conf\/all\/accept_ra\r\necho 0 &gt; \/proc\/sys\/net\/ipv6\/conf\/all\/accept_redirects\r\necho 0 &gt; \/proc\/sys\/net\/ipv6\/conf\/all\/router_solicitations\r\n<\/pre>\n<p>the kvm guest just requires this in \/etc\/network\/interfaces:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\niface eth0 inet6 static\r\n        address 2607:5300:12:2f0a::12\/124\r\n        gateway 2607:5300:12:2f0a::11\r\n<\/pre>\n<p>what i described is simplified; on top of that i have in production:<\/p>\n<ul>\n<li>v4 and v6 packet filtering<\/li>\n<li>SNAT \/ routing for traffic from the LXC guests, done on the physical server<\/li>\n<li>an openvpn tunnel to reach our other sites<\/li>\n<li>a gre tunnel to make openvpn work at the speed of tens of megabits rather than kilobits<\/li>\n<\/ul>\n<p><b>edit:<\/b> 2014-08-16 &#8211; i&#8217;ve enabled ipv6 on our internet-facing DNS servers [and advertised v6 addresses in the glue records], web server, mail server, client-facing proxies and&#8230; nothing happened. we&#8217;ve started to see some traffic and so far &#8211; no complaints. nice!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>at work we rent a dedicated server from OVH; except for the unexplained openvpn throttling all is working pretty well for the price we pay. besides the primary IPv4 address OVH can provide a few additional &#8216;failover&#8217; IPv4 addresses and a \/64 IPv6 subnet. in our setup some of the IPv4 and IPv6 addresses are routed to a KVM VM. 
below &#8211; [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17],"tags":[83,47,82],"class_list":["post-2240","post","type-post","status-publish","format-standard","hentry","category-tech","tag-ipv6","tag-linux-networking","tag-ovh"],"_links":{"self":[{"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/posts\/2240","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/comments?post=2240"}],"version-history":[{"count":21,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/posts\/2240\/revisions"}],"predecessor-version":[{"id":2295,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/posts\/2240\/revisions\/2295"}],"wp:attachment":[{"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/media?parent=2240"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/categories?post=2240"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kudzia.eu\/b\/wp-json\/wp\/v2\/tags?post=2240"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}