
OpenVZ: Bridged IPv6 subnets

I’ve been working on a GRE tunneling interface for a while, and I wanted my OpenVZ host to take care of the services that host the tunnel: for example, instead of assigning every single IPv6 address manually via vzctl, addressing should be handled from inside the container. As long as vzctl and the venet interfaces are used, it has to be done the manual way. With OpenVZ this is not entirely obvious, since the documentation is not always collected in one place.

As a matter of fact, after searching for half a day, I think I’ve got it covered. First, make sure you’re not using vzctl --ipadd when you’re adding a larger subnet. Let’s use an example:

vzctl set <ctid> --ipadd 2a01:299:a0:7000::/64

The example above will only assign one IP address, 2a01:299:a0:7000::, to your container, not the entire subnet. To get more addresses this way, you would have to make vzctl add each of them individually: 2a01:299:a0:7000::1/64, 2a01:299:a0:7000::2/64, and so on (a sketch of that busywork follows right after the commands below). The real magic happens when you start using veth and brctl correctly. To make it quick:

vzctl set <ctid> --netif_add eth0 --save

# Find the right veth interface on the host, then attach it to the bridge:
brctl addif br0 <veth-interface>
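
For comparison, staying on venet would mean one call per address, something like the purely illustrative loop below; <ctid> and the prefix are just the placeholders from the earlier example:

# One --ipadd call per address: the busywork the bridged setup avoids
for i in $(seq 1 5); do
    vzctl set <ctid> --ipadd 2a01:299:a0:7000::${i}/64 --save
done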

In the OpenVZ release I use, the bridging is set up by linking br0 with the created veth interface. How to identify this interface when you have more than one container is currently undiscovered ground for me, as, again, the documentation is not very clear on this. I’ve seen names like veth101.0 being used, but in my case, with Virtuozzo 7.x, I get interfaces like veth123a4bcd, and they are a bit hard to identify and connect to the right container. This should be handled automatically by scripts that run during the OpenVZ startup sequence, but that part is still undiscovered territory for me.
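
Two things that might help here, though I have only checked them against the vzctl documentation and not against Virtuozzo 7 itself: the host-side name is normally recorded as host_ifname= in the container’s NETIF configuration, and vzctl ships a helper script, /usr/sbin/vznetaddbr, that can attach the veth to a named bridge automatically at container start. A rough sketch, assuming the classic config path /etc/vz/conf/<ctid>.conf:

# Map a container to its host-side veth name via the NETIF line in its config
grep -o 'host_ifname=[^,]*' /etc/vz/conf/<ctid>.conf

# Or let vzctl do the bridging itself: the fifth field of --netif_add is a bridge name ...
vzctl set <ctid> --netif_add eth0,,,,br0 --save

# ... combined with pointing vznet.conf at the bridge helper shipped with vzctl
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf

If that works on your release, the cron workaround below is not needed at all.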

In the meantime, I’ve created a cron job that makes sure all the interfaces really get bridged after the server boots (which is probably a security concern if you host many containers for many users, since bridging opens up the network a bit more than you may wish):

#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Collect all veth interfaces currently present on the host.
veth=$(ifconfig | grep veth | sed 's/:/ /' | awk '{printf $1 " "}')
echo "Bridging VE interfaces..."

for interface in $veth
do
    # Only attach interfaces that brctl does not already know about.
    hasInterface=$(brctl show | grep ${interface})
    if [ "" = "$hasInterface" ] ; then
        echo brctl addif br0 ${interface}
        brctl addif br0 ${interface}
    fi
done

With this ugly little script we make sure that unbridged interfaces really get bridged, and only once, so brctl does not try to re-add interfaces that are already attached. I think there are much better ways of doing this, however. The last thing to do after this setup is to actually assign the subnet properly. On the host server:

ip route add <A:B:C:D::/64> dev br0
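
One host-side detail that is easy to forget: if the host is supposed to route the prefix into br0 (for example from a tunnel interface) rather than just switch it on the same bridge, IPv6 forwarding has to be enabled. This is plain Linux behaviour, nothing OpenVZ-specific:

# Let the host forward IPv6 towards the bridge (persist it in /etc/sysctl.conf if needed)
sysctl -w net.ipv6.conf.all.forwarding=1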

At the container:

#!/bin/bash

# Local address on the bridged interface
ip -6 addr add A:B:C:D::/64 dev eth0

# Make the gateway reachable on-link via eth0
ip -6 route add <gateway> dev eth0

# Do not route IPv6 via venet0
ip -6 route del default dev venet0

# Route the IPv6 default via the gateway on eth0
ip -6 route add default via <gateway> dev eth0
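
A quick sanity check from inside the container once the script has run, with the same placeholders as above:

# Verify the address, the routing table and that the gateway answers
ip -6 addr show dev eth0
ip -6 route show
ping6 -c 3 <gateway>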

New updates

On newer OpenVZ releases, neither tun/tap nor GRE tunneling seems to be a problem anymore. SIT, however, remains impossible to run. Most comments on the internet are very likely old, either linking to userspace applications that have to be compiled or just stating: "You have to compile it into the kernel, as it is disabled by default. Security issues." Many people also say that IF you run SIT, you should probably handle most of it on the host node, without any further examples of HOW they mean. The example below IS a live example, even if the solution itself, as always, does not work properly. However, by running those commands I managed to see signs of life from the tunnel server, which in this case was based on Hurricane Electric tunnels. The setup also has a static IP address that HE can reach.

The only problem with the example below is that I still struggle with the most common error of them all: protocol 41 port 0 unreachable. The connection itself also "demands" that you are not using the venet links. The example tries to connect to HE Fremont.

vzctl set 7030 --netdev_del he-fremont --save
vzctl restart 7030
vzctl exec 7030 ip addr add dev eth0
ip tunnel add he-fremont mode sit remote 72.52.104.74 local any ttl 255 dev
vzctl set 7030 --netdev_add he-fremont --save
vzctl exec 7030 ip link set he-fremont up
vzctl exec 7030 ip tun change he-fremont local
vzctl exec 7030 ip addr add dev he-fremont
vzctl exec 7030 ip route add ::/0 dev he-fremont
vzctl exec 7030 ip addr add dev eth0
vzctl exec 7030 ip -f inet6 addr

Trying to ping the addresses added above makes tcpdump react, so there is something going on. But since protocol 41 fails here, everything stops at that point. If someone has a fantastic solution for how to activate IPv6/protocol 41 on a VE, please tell!
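
For anyone wanting to reproduce the tcpdump part: a capture filter along these lines, run on the hardware node, shows whether protocol 41 packets from the HE endpoint arrive at all. The interface name is an assumption on my part; the address is the Fremont endpoint from the example above:

# Protocol 41 is IPv6-in-IPv4; watch for traffic to or from the HE tunnel server
tcpdump -ni eth0 'ip proto 41 and host 72.52.104.74'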

