#summary CORE HOWTO and references for Linux network namespaces
= Introduction =

Linux network namespaces (netns) are a lightweight, container-based virtualization feature available in the mainline Linux kernel since version 2.6.27. A virtual network stack can be associated with a group of processes, similar to the FreeBSD jail mechanism.

Each namespace has its own loopback device and process space. Virtual or real devices can be added to each network namespace, and you can assign IP addresses to these devices and use each namespace as a network node. By default these network namespaces share the same filesystem, just like CORE nodes on FreeBSD. Netns does not have the same security and resource restrictions as OpenVZ containers, and does not require a separate OS template.

You do not need to patch your kernel in order to use network namespaces. Modern distributions such as Ubuntu 10.04 and Fedora 13 have netns support enabled in their default kernels.
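To experiment with the underlying kernel feature independent of CORE, the iproute2 `ip netns` subcommand can create and inspect namespaces directly. This is only a minimal sketch, and it assumes an iproute2 release new enough to include netns support (newer than the tools shipped with some of the distributions mentioned above):

{{{
# create a namespace, bring up its private loopback device, and run a command in it
ip netns add demo
ip netns exec demo ip link set lo up
ip netns exec demo ip addr show
# remove the namespace when done
ip netns delete demo
}}}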
= Command-line Example =

Below is an example of building two virtual nodes from the command line and connecting them with a bridge (a wired network in CORE). The `vnoded` and `vcmd` commands are provided by CORE. See the Python script examples for doing this at a higher level with Python scripting.

{{{
#!/bin/sh
# Below is a transcript of creating two emulated nodes and connecting them
# together with a wired link. You can run the core-cleanup.sh script to clean
# up after this script.

# create node 1 namespace container
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name n1.0.1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.1/24

# create node 2 namespace container
vnoded -c /tmp/n2.ctl -l /tmp/n2.log -p /tmp/n2.pid
# create a virtual Ethernet (veth) pair, installing one end into node 2
ip link add name n2.0.1 type veth peer name n2.0
ip link set n2.0 netns `cat /tmp/n2.pid`
vcmd -c /tmp/n2.ctl -- ip link set n2.0 name eth0
vcmd -c /tmp/n2.ctl -- ifconfig eth0 10.0.0.2/24

# bridge together nodes 1 and 2 using the other end of each veth pair
brctl addbr b.1.1
brctl setfd b.1.1 0
brctl addif b.1.1 n1.0.1
brctl addif b.1.1 n2.0.1
ip link set n1.0.1 up
ip link set n2.0.1 up
ip link set b.1.1 up

# display connectivity and ping from node 1 to node 2
brctl show
vcmd -c /tmp/n1.ctl -- ping 10.0.0.2
}}}
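If the core-cleanup.sh script is not handy, a manual teardown along these lines should undo the steps above; this is only a sketch and assumes the node names, bridge name, and /tmp file paths used in the transcript:

{{{
#!/bin/sh
# remove the bridge connecting the two nodes
ip link set b.1.1 down
brctl delbr b.1.1
# stop the vnoded daemons; destroying each namespace also removes the
# veth interfaces that were moved into it (and their host-side peers)
kill `cat /tmp/n1.pid` `cat /tmp/n2.pid`
rm -f /tmp/n1.ctl /tmp/n1.log /tmp/n1.pid /tmp/n2.ctl /tmp/n2.log /tmp/n2.pid
}}}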
= Old Instructions for Building from Source =

CORE 4.0 now supports network namespaces, so you can follow the instructions in the manual or on the [Quickstart] page. *Below are old instructions for using the development snapshot:*

# install Ubuntu 10.04 or 9.10, or Fedora 13 or 12 (namespace support is built-in!). If you already have Linux installed, you can check [NamespaceKernels your kernel version] to see if it supports network namespace virtualization:
{{{
# Linux kernel version: is this number >= 2.6.27?
$ uname -r
2.6.32-22-generic
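# optionally, verify that namespace support was compiled into the kernel;
# the config file path below is typical for Ubuntu/Fedora and may differ
# elsewhere, and the expected result is CONFIG_NET_NS=y
$ grep CONFIG_NET_NS /boot/config-`uname -r`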
}}}
# make sure your system is up to date (fresher kernels are generally better):
{{{
# for Ubuntu: you can also use synaptic or
# update-manager instead of apt-get update/dist-upgrade
sudo apt-get update
sudo apt-get dist-upgrade
# for Fedora:
yum update
}}}
# install the packages required to compile CORE:
{{{
# for Ubuntu:
sudo apt-get install bash bridge-utils ebtables iproute \
    libev3 libtk-img python tcl tk xterm autoconf \
    automake gcc libev-dev libtool make pkg-config \
    python2.6-dev libreadline-dev
# for Fedora:
yum install autoconf automake bash bridge-utils ebtables \
    gcc libev-devel libtool make pkgconfig python-devel \
    readline-devel sudo tcl tk tkimg urw-fonts xauth \
    xorg-x11-server-utils xterm
}}}
# Fedora only: disable SELinux by setting `SELINUX=disabled` in `/etc/sysconfig/selinux`; turn off the iptables and ip6tables firewalls (`chkconfig iptables off`, `chkconfig ip6tables off`) or you will need to configure permissive rules for CORE; and turn on IP forwarding in `/etc/sysctl.conf` (`net.ipv4.ip_forward = 1`, `net.ipv6.conf.all.forwarding = 1`). A sketch of these steps is shown below:
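{{{
# Fedora only -- a rough sketch of the adjustments described above;
# run as root and review before using, since file locations may vary
# 1. edit /etc/sysconfig/selinux and set: SELINUX=disabled
# 2. turn off the firewalls:
chkconfig iptables off
chkconfig ip6tables off
# 3. enable IPv4/IPv6 forwarding now and at boot:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
sysctl -p
}}}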
# install Quagga; we recommend [http://downloads.pf.itd.nrl.navy.mil/ospf-manet/ quagga-0.99.16-mr1.0] for the OSPF MANET models. If building Quagga from source on your host system, you should configure it to use the state directories expected by CORE:
{{{
./configure --enable-user=root --enable-group=root --with-cflags=-ggdb \
    --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
    --localstatedir=/var/run/quagga
}}}
# grab the development snapshot from [http://downloads.pf.itd.nrl.navy.mil/core/source/nightly_snapshots/core-svnsnap.tgz core-svnsnap.tgz]
# unpack and build:
{{{
tar xzf core-svnsnap.tgz
cd core
./bootstrap.sh
./configure  # this command should report "Linux Namespaces emulation: yes"
make
sudo make install
}}}
# after installing, start the CORE services: `sudo /etc/init.d/core start`
# run the CORE GUI: `core`
The containers created will access the common host filesystem, except that some directories are private on a per-container basis; these typically contain per-node configuration files, pids, etc. Notice in the Quagga configuration above that the directories /usr/local/etc/quagga and /var/run/quagga hold Quagga's private state information. Other applications may need their own private directories, so you will need to manually edit some configuration scripts to enable this. The tips below were suggested on the mailing list:

For the Linux netns version, you can use the privatedir() method of LxcNode (see vnode.py). The default per-node private directories are self.confdir (/usr/local/etc/quagga), /var/run, and /tmp.

There is no simple way to add other directories from the CORE GUI or with API messages. One approach is to create a subclass of LxcNode, have CoreNode inherit from it, and call privatedir() during startup() or boot(). See CtrlIfLxcNode in pycore_nodes.py for a similar example that adds a control network interface to nodes and runs sshd (useful for X11 forwarding).

Another option is to specify a custom startup script that runs a daemon using conf, log, and pid files specified via command-line parameters (e.g. 'sshd -f /tmp/n3_sshd_config'). A sketch of such a script is shown below.
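As an illustration only (the daemons, file names, and options here are hypothetical), such a script might run each daemon with node-specific conf/log/pid files so that nodes sharing the host filesystem do not overwrite each other's state:

{{{
#!/bin/sh
# hypothetical custom startup script for a node named n3
sshd -f /tmp/n3_sshd_config -o PidFile=/tmp/n3_sshd.pid
zebra -d -f /tmp/n3_zebra.conf -i /tmp/n3_zebra.pid
}}}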
Note that not all features may be available in this network namespaces version of CORE. We are working to make the system more modular and to preserve functionality across versions.
= References =

Linux Containers SourceForge project page:

* http://lxc.sourceforge.net

IBM Linux container tools tutorial:

* http://www.ibm.com/developerworks/linux/library/l-lxc-containers/

Other helpful sites:

* http://en.opensuse.org/LXC
* http://wiki.archlinux.org/index.php/Linux_Containers
* http://nigel.mcnie.name/blog/a-five-minute-guide-to-linux-containers-for-debian
* http://sunoano.name/ws/public_xhtml/linux_containers.html
* http://sysadmin-cookbook.rot13.org/#lxc

Mailing lists:

* Introduction, Jan 2007: http://lwn.net/Articles/219597/
* This message discusses frustrations OpenVZ users might have with non-Redhat host distros: http://openvz.org/pipermail/users/2010-January/003190.html
* Discussion of LXC differences vs. OpenVZ and Linux-VServer: http://openvz.org/pipermail/users/2010-January/003192.html