initial commit removing all related xen code, docs, files

This commit is contained in:
Blake J. Harnden 2018-03-07 15:47:14 -08:00
parent 940e10ef5e
commit a5370ee28c
39 changed files with 80 additions and 1771 deletions

View file

@@ -28,7 +28,7 @@ ACLOCAL_AMFLAGS = -I config
 # extra files to include with distribution tarball
 EXTRA_DIST = bootstrap.sh LICENSE \
 	README.md ASSIGNMENT_OF_COPYRIGHT.pdf \
-	README-Xen Changelog kernel \
+	Changelog kernel \
 	python-prefix.py revision.sh \
 	.version .version.date \
 	packaging/deb/compat \

View file

@@ -1,87 +0,0 @@
CORE Xen README
This file describes the xen branch of the CORE development tree, which enables
nodes implemented as Xen domUs. When you edit node types, you are given the option
of changing the machine type (netns, physical, or xen) and the profile for
each node type.
CORE will create each domU machine on the fly, having a bootable ISO image that
contains the root filesystem, and a special persistent area (/rtr/persistent)
using an LVM volume where configuration is stored. See the /etc/core/xen.conf
file for related settings here.
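For orientation, here is a hypothetical /etc/core/xen.conf fragment. The option
names match the getconfigitem() lookups in xen.py; the values shown are only
illustrative, not shipped defaults:

```
# illustrative /etc/core/xen.conf fragment (example values)
vg_name = core
disk_size = 256M
ram_size = 128
iso_file = /opt/core-xen/iso-files/ubase.iso
mount_path = /rtr/persistent
etc_path = etc
root_password = core
ssh_key_path = /opt/core-xen/ssh
```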
INSTALLATION
1. Tested under OpenSUSE 11.3 which allows installing a Xen dom0 during the
install process.
2. Create an LVM volume group having enough free space available for CORE to
build logical volumes for domU nodes. The name of this group is set with the
'vg_name=' option in /etc/core/xen.conf. (With 256M per persistent area,
10GB would allow for 40 nodes.)
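The sizing rule of thumb above is simple arithmetic; a quick sketch using the
numbers from the example in step 2:

```python
# Capacity estimate for the CORE Xen volume group, per the example above:
# each domU gets a 256M persistent area carved out of the volume group.
PERSIST_MB = 256
vg_free_mb = 10 * 1024  # 10GB of free space in the volume group

max_nodes = vg_free_mb // PERSIST_MB
print(max_nodes)  # 40
```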
3. To get libev-devel in OpenSUSE, use:
zypper ar http://download.opensuse.org/repositories/X11:/windowmanagers/openSUSE_11.3 WindowManagers
zypper install libev-devel
4. In addition to the normal CORE dependencies
(see http://code.google.com/p/coreemu/wiki/Quickstart), pyparted-3.2 is used
when creating LVM partitions and decorator-3.3.0 is a dependency for
pyparted. The 'python setup.py install' and 'make install' need to be
performed on these source tarballs as no packages are available.
tar xzf decorator-3.3.0.tar.gz
cd decorator-3.3.0
python setup.py build
python setup.py install
tar xzf pyparted-3.2.tar.gz
cd pyparted-3.2
./configure
make
make install
5. These Xen parameters were used for dom0, by editing /boot/grub/menu.lst:
a) Add options to "kernel /xen.gz" line:
gnttab_max_nr_frames=128 dom0_mem=1G dom0_max_vcpus=2 dom0_vcpus_pin
b) Make Xen default boot by editing the "default" line with the
index for the Xen boot option. e.g. change "default 0" to "default 2"
Reboot to enable the Xen kernel.
6. Run CORE's ./configure script as root to properly discover sbin binaries.
tar xzf core-xen.tgz
cd core-xen
./bootstrap.sh
./configure
make
make install
7. Put your ISO images in /opt/core-xen/iso-files and set the "iso_file="
xen.conf option appropriately.
8. Uncomment the controlnet entry in /etc/core/core.conf:
# establish a control backchannel for accessing nodes
controlnet = 172.16.0.0/24
This setting governs what IP addresses will be used for a control channel.
Given this default setting the host dom0 will have the address 172.16.0.254
assigned to a bridge device; domU VMs will get interfaces joined to this
bridge, having addresses such as 172.16.0.1 for node n1, 172.16.0.2 for n2,
etc.
When 'controlnet =' is unspecified in the core.conf file, double-clicking on
a node results in the 'xm console' method. A key mapping is set up so you
can press 'F1' then 'F2' to login as root. The key ctrl+']' detaches from the
console. Only one console is available per domU VM.
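The control-channel addressing convention described above (dom0 at .254, node
nN at host address N) can be sketched with Python's ipaddress module; this is
an illustration of the convention, not code from CORE itself:

```python
import ipaddress

# Control backchannel per the default core.conf setting above.
controlnet = ipaddress.ip_network("172.16.0.0/24")

# The dom0 bridge takes the highest usable host address.
dom0_addr = controlnet.broadcast_address - 1

# domU node nN gets host address N on the same bridge.
def control_addr(node_number):
    return controlnet.network_address + node_number

print(dom0_addr)        # 172.16.0.254
print(control_addr(1))  # 172.16.0.1
print(control_addr(2))  # 172.16.0.2
```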
9. When 'controlnet =' is specified, double-clicking on a node results in an
attempt to ssh to that node's control IP address.
Put a host RSA key for use on the domUs in /opt/core-xen/ssh:
mkdir -p /opt/core-xen/ssh
ssh-keygen -t rsa -f /opt/core-xen/ssh/ssh_host_rsa_key
cp ~/.ssh/id_rsa.pub /opt/core-xen/ssh/authorized_keys
chmod 600 /opt/core-xen/ssh/authorized_keys

View file

@@ -309,7 +309,6 @@ AC_CONFIG_FILES([Makefile
 	scripts/Makefile
 	scripts/core-daemon.service
 	scripts/perf/Makefile
-	scripts/xen/Makefile
 	doc/Makefile
 	doc/conf.py
 	doc/man/Makefile

View file

@@ -18,14 +18,12 @@ SBIN_FILES = \
 	sbin/core-cleanup \
 	sbin/core-daemon \
 	sbin/core-manage \
-	sbin/core-xen-cleanup \
 	sbin/coresendmsg

 dist_sbin_SCRIPTS = $(SBIN_FILES)

 CONF_FILES = \
-	data/core.conf \
-	data/xen.conf
+	data/core.conf

 coreconfdir = $(CORE_CONF_DIR)
 dist_coreconf_DATA = $(CONF_FILES)

View file

@@ -618,9 +618,6 @@ class CoreRequestHandler(SocketServer.BaseRequestHandler):
             model = message.get_tlv(NodeTlvs.MODEL.value)

             class_args = {"start": start}
-            if node_type == NodeTypes.XEN.value:
-                class_args["model"] = model
-
             if node_type == NodeTypes.RJ45.value and hasattr(
                     self.session.options, "enablerj45") and self.session.options.enablerj45 == "0":
                 class_args["start"] = False
@@ -639,7 +636,7 @@ class CoreRequestHandler(SocketServer.BaseRequestHandler):
             # add services to a node, either from its services TLV or
             # through the configured defaults for this node type
-            if node_type in [NodeTypes.DEFAULT.value, NodeTypes.PHYSICAL.value, NodeTypes.XEN.value]:
+            if node_type in [NodeTypes.DEFAULT.value, NodeTypes.PHYSICAL.value]:
                 if model is None:
                     # TODO: default model from conf file?
                     model = "router"

View file

@@ -69,7 +69,6 @@ class NodeTypes(Enum):
    """
    DEFAULT = 0
    PHYSICAL = 1
-    XEN = 2
    TBD = 3
    SWITCH = 4
    HUB = 5

View file

@@ -8,13 +8,11 @@ from core.enumerations import NodeTypes
 from core.netns import nodes
 from core.netns.vnet import GreTapBridge
 from core.phys import pnodes
-from core.xen import xen

 # legacy core nodes, that leverage linux bridges
 NODES = {
     NodeTypes.DEFAULT: nodes.CoreNode,
     NodeTypes.PHYSICAL: pnodes.PhysicalNode,
-    NodeTypes.XEN: xen.XenNode,
     NodeTypes.TBD: None,
     NodeTypes.SWITCH: nodes.SwitchNode,
     NodeTypes.HUB: nodes.HubNode,

View file

@@ -46,11 +46,15 @@ class Sdt(object):
     DEFAULT_ALT = 2500
     # TODO: read in user's nodes.conf here; below are default node types from the GUI
     DEFAULT_SPRITES = [
-        ("router", "router.gif"), ("host", "host.gif"),
-        ("PC", "pc.gif"), ("mdr", "mdr.gif"),
-        ("prouter", "router_green.gif"), ("xen", "xen.gif"),
-        ("hub", "hub.gif"), ("lanswitch", "lanswitch.gif"),
-        ("wlan", "wlan.gif"), ("rj45", "rj45.gif"),
+        ("router", "router.gif"),
+        ("host", "host.gif"),
+        ("PC", "pc.gif"),
+        ("mdr", "mdr.gif"),
+        ("prouter", "router_green.gif"),
+        ("hub", "hub.gif"),
+        ("lanswitch", "lanswitch.gif"),
+        ("wlan", "wlan.gif"),
+        ("rj45", "rj45.gif"),
         ("tunnel", "tunnel.gif"),
     ]

@@ -404,8 +408,7 @@ class Sdt(object):
         net = False
         if nodetype == NodeTypes.DEFAULT.value or \
-                nodetype == NodeTypes.PHYSICAL.value or \
-                nodetype == NodeTypes.XEN.value:
+                nodetype == NodeTypes.PHYSICAL.value:
             if model is None:
                 model = "router"
             type = model

View file

@@ -87,7 +87,7 @@ class CoreServices(ConfigurableManager):
     name = "services"
     config_type = RegisterTlvs.UTILITY.value
-    _invalid_custom_names = ('core', 'api', 'emane', 'misc', 'netns', 'phys', 'services', 'xen')
+    _invalid_custom_names = ('core', 'api', 'emane', 'misc', 'netns', 'phys', 'services')

     def __init__(self, session):
         """

View file

@@ -45,7 +45,6 @@ from core.mobility import Ns2ScriptedMobility
 from core.netns import nodes
 from core.sdt import Sdt
 from core.service import CoreServices
-from core.xen.xenconfig import XenConfigManager
 from core.xml.xmlsession import save_session_xml

 # set default node map
@@ -190,10 +189,6 @@ class Session(object):
         self.emane = EmaneManager(session=self)
         self.add_config_object(EmaneManager.name, EmaneManager.config_type, self.emane.configure)

-        # setup xen
-        self.xen = XenConfigManager(session=self)
-        self.add_config_object(XenConfigManager.name, XenConfigManager.config_type, self.xen.configure)
-
         # setup sdt
         self.sdt = Sdt(session=self)

View file

@@ -1,789 +0,0 @@
"""
xen.py: implementation of the XenNode and XenVEth classes that support
generating Xen domUs based on an ISO image and persistent configuration area
"""
import base64
import os
import shutil
import string
import subprocess
import sys
import threading
import crypt
from core import CoreCommandError
from core import constants
from core import logger
from core.coreobj import PyCoreNetIf
from core.coreobj import PyCoreNode
from core.enumerations import NodeTypes
from core.misc import nodeutils
from core.misc import utils
from core.netns.vnode import LxcNode
try:
import parted
except ImportError:
logger.error("failed to import parted for xen nodes")
try:
import fsimage
except ImportError:
# fix for fsimage under Ubuntu
sys.path.append("/usr/lib/xen-default/lib/python")
try:
import fsimage
except ImportError:
logger.error("failed to import fsimage for xen nodes")
# XXX move these out to config file
AWK_PATH = "/bin/awk"
KPARTX_PATH = "/sbin/kpartx"
LVCREATE_PATH = "/sbin/lvcreate"
LVREMOVE_PATH = "/sbin/lvremove"
LVCHANGE_PATH = "/sbin/lvchange"
MKFSEXT4_PATH = "/sbin/mkfs.ext4"
MKSWAP_PATH = "/sbin/mkswap"
TAR_PATH = "/bin/tar"
SED_PATH = "/bin/sed"
XM_PATH = "/usr/sbin/xm"
UDEVADM_PATH = "/sbin/udevadm"
class XenVEth(PyCoreNetIf):
def __init__(self, node, name, localname, mtu=1500, net=None, start=True, hwaddr=None):
# note that net arg is ignored
PyCoreNetIf.__init__(self, node=node, name=name, mtu=mtu)
self.localname = localname
self.up = False
self.hwaddr = hwaddr
if start:
self.startup()
def startup(self):
cmd = [XM_PATH, "network-attach", self.node.vmname, "vifname=%s" % self.localname, "script=vif-core"]
if self.hwaddr is not None:
cmd.append("mac=%s" % self.hwaddr)
subprocess.check_call(cmd)
subprocess.check_call([constants.IP_BIN, "link", "set", self.localname, "up"])
self.up = True
def shutdown(self):
if not self.up:
return
if self.localname:
if self.hwaddr is not None:
pass
# this should be doable, but some argument isn't a string
# check_call([XM_PATH, "network-detach", self.node.vmname,
# self.hwaddr])
self.up = False
class XenNode(PyCoreNode):
apitype = NodeTypes.XEN.value
files_to_ignore = frozenset([
# "ipforward.sh",
"quaggaboot.sh",
])
files_redirection = {
"ipforward.sh": "/core-tmp/ipforward.sh",
}
cmds_to_ignore = frozenset([
# "sh ipforward.sh",
# "sh quaggaboot.sh zebra",
# "sh quaggaboot.sh ospfd",
# "sh quaggaboot.sh ospf6d",
"killall zebra",
"killall ospfd",
"killall ospf6d",
"pidof zebra", "pidof ospfd", "pidof ospf6d",
])
def redir_cmd_ipforward(self):
sysctl_file = open(os.path.join(self.mountdir, self.etcdir, "sysctl.conf"), "a")
p1 = subprocess.Popen([AWK_PATH, "/^\/sbin\/sysctl -w/ {print $NF}",
os.path.join(self.nodedir, "core-tmp/ipforward.sh")], stdout=sysctl_file)
p1.wait()
sysctl_file.close()
def redir_cmd_zebra(self):
subprocess.check_call([SED_PATH, "-i", "-e", "s/^zebra=no/zebra=yes/",
os.path.join(self.mountdir, self.etcdir, "quagga/daemons")])
def redir_cmd_ospfd(self):
subprocess.check_call([SED_PATH, "-i", "-e", "s/^ospfd=no/ospfd=yes/",
os.path.join(self.mountdir, self.etcdir, "quagga/daemons")])
def redir_cmd_ospf6d(self):
subprocess.check_call([SED_PATH, "-i", "-e", "s/^ospf6d=no/ospf6d=yes/",
os.path.join(self.mountdir, self.etcdir, "quagga/daemons")])
cmds_redirection = {
"sh ipforward.sh": redir_cmd_ipforward,
"sh quaggaboot.sh zebra": redir_cmd_zebra,
"sh quaggaboot.sh ospfd": redir_cmd_ospfd,
"sh quaggaboot.sh ospf6d": redir_cmd_ospf6d,
}
# CoreNode: no __init__, take from LxcNode & SimpleLxcNode
def __init__(self, session, objid=None, name=None,
nodedir=None, bootsh="boot.sh", start=True, model=None,
vgname=None, ramsize=None, disksize=None,
isofile=None):
# SimpleLxcNode initialization
PyCoreNode.__init__(self, session=session, objid=objid, name=name)
self.nodedir = nodedir
self.model = model
# indicates startup() has been invoked and disk has been initialized
self.up = False
# indicates boot() has been invoked and domU is running
self.booted = False
self.ifindex = 0
self.lock = threading.RLock()
self._netif = {}
# domU name
self.vmname = "c" + str(session.session_id) + "-" + name
# LVM volume group name
self.vgname = self.getconfigitem("vg_name", vgname)
# LVM logical volume name
self.lvname = self.vmname + "-"
# LVM logical volume device path name
self.lvpath = os.path.join("/dev", self.vgname, self.lvname)
self.disksize = self.getconfigitem("disk_size", disksize)
self.ramsize = int(self.getconfigitem("ram_size", ramsize))
self.isofile = self.getconfigitem("iso_file", isofile)
# temporary mount point for paused VM persistent filesystem
self.mountdir = None
self.etcdir = self.getconfigitem("etc_path")
# TODO: remove this temporary hack
self.files_redirection["/usr/local/etc/quagga/Quagga.conf"] = os.path.join(
self.getconfigitem("mount_path"), self.etcdir, "quagga/Quagga.conf")
# LxcNode initialization
# self.makenodedir()
if self.nodedir is None:
self.nodedir = os.path.join(session.session_dir, self.name + ".conf")
self.mountdir = self.nodedir + self.getconfigitem("mount_path")
if not os.path.isdir(self.mountdir):
os.makedirs(self.mountdir)
self.tmpnodedir = True
else:
raise Exception("Xen PVM node requires a temporary nodedir")
self.bootsh = bootsh
if start:
self.startup()
def getconfigitem(self, name, default=None):
"""
Configuration items come from the xen.conf file and/or input from
the GUI, and are stored in the session using the XenConfigManager
object. self.model is used to identify particular profiles
associated with a node type in the GUI.
"""
return self.session.xen.getconfigitem(name=name, model=self.model, node=self, value=default)
# from class LxcNode (also SimpleLxcNode)
def startup(self):
logger.warn("XEN PVM startup() called: preparing disk for %s", self.name)
self.lock.acquire()
try:
if self.up:
raise Exception("already up")
self.createlogicalvolume()
self.createpartitions()
persistdev = self.createfilesystems()
subprocess.check_call([constants.MOUNT_BIN, "-t", "ext4", persistdev, self.mountdir])
self.untarpersistent(tarname=self.getconfigitem("persist_tar_iso"),
iso=True)
self.setrootpassword(pw=self.getconfigitem("root_password"))
self.sethostname(old="UBASE", new=self.name)
self.setupssh(keypath=self.getconfigitem("ssh_key_path"))
self.createvm()
self.up = True
finally:
self.lock.release()
# from class LxcNode (also SimpleLxcNode)
def boot(self):
logger.warn("XEN PVM boot() called")
self.lock.acquire()
if not self.up:
raise Exception("Can't boot VM without initialized disk")
if self.booted:
self.lock.release()
return
self.session.services.bootnodeservices(self)
tarname = self.getconfigitem("persist_tar")
if tarname:
self.untarpersistent(tarname=tarname, iso=False)
try:
subprocess.check_call([constants.UMOUNT_BIN, self.mountdir])
self.unmount_all(self.mountdir)
subprocess.check_call([UDEVADM_PATH, "settle"])
subprocess.check_call([KPARTX_PATH, "-d", self.lvpath])
# time.sleep(5)
# time.sleep(1)
# unpause VM
logger.warn("XEN PVM boot() unpause domU %s", self.vmname)
utils.check_cmd([XM_PATH, "unpause", self.vmname])
self.booted = True
finally:
self.lock.release()
def validate(self):
self.session.services.validatenodeservices(self)
# from class LxcNode (also SimpleLxcNode)
def shutdown(self):
logger.warn("XEN PVM shutdown() called")
if not self.up:
return
self.lock.acquire()
try:
if self.up:
# sketch from SimpleLxcNode
for netif in self.netifs():
netif.shutdown()
try:
# RJE XXX what to do here
if self.booted:
utils.check_cmd([XM_PATH, "destroy", self.vmname])
self.booted = False
except CoreCommandError:
# ignore this error too, the VM may have exited already
logger.exception("error during shutdown")
# discard LVM volume
lvm_remove_count = 0
while os.path.exists(self.lvpath):
try:
subprocess.check_call([UDEVADM_PATH, "settle"])
utils.check_cmd([LVCHANGE_PATH, "-an", self.lvpath])
lvm_remove_count += 1
utils.check_cmd([LVREMOVE_PATH, "-f", self.lvpath])
except OSError:
logger.exception("error during shutdown")
if lvm_remove_count > 1:
logger.warn("XEN PVM shutdown() required %d lvremove executions.", lvm_remove_count)
self._netif.clear()
del self.session
self.up = False
finally:
self.rmnodedir()
self.lock.release()
def createlogicalvolume(self):
"""
Create a logical volume for this Xen domU. Called from startup().
"""
if os.path.exists(self.lvpath):
raise Exception("LVM volume already exists")
utils.check_cmd([LVCREATE_PATH, "--size", self.disksize,
"--name", self.lvname, self.vgname])
def createpartitions(self):
"""
Partition the LVM volume into persistent and swap partitions
using the parted module.
"""
dev = parted.Device(path=self.lvpath)
dev.removeFromCache()
disk = parted.freshDisk(dev, "msdos")
constraint = parted.Constraint(device=dev)
persist_size = int(0.75 * constraint.maxSize)
self.createpartition(device=dev, disk=disk, start=1,
end=persist_size - 1, type="ext4")
self.createpartition(device=dev, disk=disk, start=persist_size,
end=constraint.maxSize - 1, type="linux-swap(v1)")
disk.commit()
def createpartition(self, device, disk, start, end, type):
"""
Create a single partition of the specified type and size and add
it to the disk object, using the parted module.
"""
geo = parted.Geometry(device=device, start=start, end=end)
fs = parted.FileSystem(type=type, geometry=geo)
part = parted.Partition(disk=disk, fs=fs, type=parted.PARTITION_NORMAL, geometry=geo)
constraint = parted.Constraint(exactGeom=geo)
disk.addPartition(partition=part, constraint=constraint)
def createfilesystems(self):
"""
Make an ext4 filesystem and swap space. Return the device name for
the persistent partition so we can mount it.
"""
output = subprocess.Popen([KPARTX_PATH, "-l", self.lvpath],
stdout=subprocess.PIPE).communicate()[0]
lines = output.splitlines()
persistdev = "/dev/mapper/" + lines[0].strip().split(" ")[0].strip()
swapdev = "/dev/mapper/" + lines[1].strip().split(" ")[0].strip()
subprocess.check_call([KPARTX_PATH, "-a", self.lvpath])
utils.check_cmd([MKFSEXT4_PATH, "-L", "persist", persistdev])
utils.check_cmd([MKSWAP_PATH, "-f", "-L", "swap", swapdev])
return persistdev
def untarpersistent(self, tarname, iso):
"""
Unpack a persistent template tar file to the mounted mount dir.
Uses fsimage library to read from an ISO file.
"""
# filename may use hostname
tarname = tarname.replace("%h", self.name)
if iso:
try:
fs = fsimage.open(self.isofile, 0)
except IOError:
logger.exception("Failed to open ISO file: %s", self.isofile)
return
try:
tardata = fs.open_file(tarname).read()
except IOError:
logger.exception("Failed to open tar file: %s", tarname)
return
finally:
del fs
else:
try:
f = open(tarname)
tardata = f.read()
f.close()
except IOError:
logger.exception("Failed to open tar file: %s", tarname)
return
p = subprocess.Popen([TAR_PATH, "-C", self.mountdir, "--numeric-owner",
"-xf", "-"], stdin=subprocess.PIPE)
p.communicate(input=tardata)
p.wait()
def setrootpassword(self, pw):
"""
Set the root password by updating the shadow password file that
is on the filesystem mounted in the temporary area.
"""
saltedpw = crypt.crypt(pw, "$6$" + base64.b64encode(os.urandom(12)))
subprocess.check_call([SED_PATH, "-i", "-e",
"/^root:/s_^root:\([^:]*\):_root:" + saltedpw + ":_",
os.path.join(self.mountdir, self.etcdir, "shadow")])
def sethostname(self, old, new):
"""
Set the hostname by updating the hostname and hosts files that
reside on the filesystem mounted in the temporary area.
"""
subprocess.check_call([SED_PATH, "-i", "-e", "s/%s/%s/" % (old, new),
os.path.join(self.mountdir, self.etcdir, "hostname")])
subprocess.check_call([SED_PATH, "-i", "-e", "s/%s/%s/" % (old, new),
os.path.join(self.mountdir, self.etcdir, "hosts")])
def setupssh(self, keypath):
"""
Configure SSH access by installing host keys and a system-wide
authorized_keys file.
"""
sshdcfg = os.path.join(self.mountdir, self.etcdir, "ssh/sshd_config")
subprocess.check_call([SED_PATH, "-i", "-e", "s/PermitRootLogin no/PermitRootLogin yes/", sshdcfg])
sshdir = os.path.join(self.getconfigitem("mount_path"), self.etcdir, "ssh")
# backslash slashes for use in sed
sshdir = sshdir.replace("/", "\\/")
subprocess.check_call([SED_PATH, "-i", "-e",
"s/#AuthorizedKeysFile %h\/.ssh\/authorized_keys/" + \
"AuthorizedKeysFile " + sshdir + "\/authorized_keys/",
sshdcfg])
for f in "ssh_host_rsa_key", "ssh_host_rsa_key.pub", "authorized_keys":
src = os.path.join(keypath, f)
dst = os.path.join(self.mountdir, self.etcdir, "ssh", f)
shutil.copy(src, dst)
if f[-3:] != "pub":
os.chmod(dst, 0600)
def createvm(self):
"""
Instantiate a *paused* domU VM.
Instantiate it now so we can add network interfaces, and keep it
paused so the filesystem remains open for configuration.
"""
args = [XM_PATH, "create", os.devnull, "--paused"]
args.extend(["name=" + self.vmname, "memory=" + str(self.ramsize)])
args.append("disk=tap:aio:" + self.isofile + ",hda,r")
args.append("disk=phy:" + self.lvpath + ",hdb,w")
args.append("bootloader=pygrub")
bootargs = "--kernel=/isolinux/vmlinuz --ramdisk=/isolinux/initrd"
args.append("bootargs=" + bootargs)
for action in ("poweroff", "reboot", "suspend", "crash", "halt"):
args.append("on_%s=destroy" % action)
args.append("extra=" + self.getconfigitem("xm_create_extra"))
utils.check_cmd(args)
# from class LxcNode
def privatedir(self, path):
# self.warn("XEN PVM privatedir() called")
# Do nothing, Xen PVM nodes are fully private
pass
# from class LxcNode
def opennodefile(self, filename, mode="w"):
logger.warn("XEN PVM opennodefile() called")
raise Exception("Can't open VM file with opennodefile()")
# from class LxcNode
# open a file on a paused Xen node
def openpausednodefile(self, filename, mode="w"):
dirname, basename = os.path.split(filename)
if not basename:
raise ValueError("no basename for filename: %s" % filename)
if dirname and dirname[0] == "/":
dirname = dirname[1:]
# dirname = dirname.replace("/", ".")
dirname = os.path.join(self.nodedir, dirname)
if not os.path.isdir(dirname):
os.makedirs(dirname, mode=0755)
hostfilename = os.path.join(dirname, basename)
return open(hostfilename, mode)
# from class LxcNode
def nodefile(self, filename, contents, mode=0644):
if filename in self.files_to_ignore:
# self.warn("XEN PVM nodefile(filename=%s) ignored" % [filename])
return
if filename in self.files_redirection:
redirection_filename = self.files_redirection[filename]
logger.warn("XEN PVM nodefile(filename=%s) redirected to %s", filename, redirection_filename)
filename = redirection_filename
logger.warn("XEN PVM nodefile(filename=%s) called", filename)
self.lock.acquire()
if not self.up:
self.lock.release()
raise Exception("Can't access VM file as VM disk isn't ready")
if self.booted:
self.lock.release()
raise Exception("Can't access VM file as VM is already running")
try:
f = self.openpausednodefile(filename, "w")
f.write(contents)
os.chmod(f.name, mode)
f.close()
logger.info("created nodefile: %s; mode: 0%o", f.name, mode)
finally:
self.lock.release()
# from class SimpleLxcNode
def alive(self):
# is VM running?
return False # XXX
def cmd(self, args, wait=True):
cmd_string = string.join(args, " ")
if cmd_string in self.cmds_to_ignore:
# self.warn("XEN PVM cmd(args=[%s]) called and ignored" % cmdAsString)
return 0
if cmd_string in self.cmds_redirection:
self.cmds_redirection[cmd_string](self)
return 0
logger.warn("XEN PVM cmd(args=[%s]) called, but not yet implemented", cmd_string)
return 0
def cmdresult(self, args):
cmd_string = string.join(args, " ")
if cmd_string in self.cmds_to_ignore:
# self.warn("XEN PVM cmd(args=[%s]) called and ignored" % cmdAsString)
return 0, ""
logger.warn("XEN PVM cmdresult(args=[%s]) called, but not yet implemented", cmd_string)
return 0, ""
def popen(self, args):
cmd_string = string.join(args, " ")
logger.warn("XEN PVM popen(args=[%s]) called, but not yet implemented", cmd_string)
return
def icmd(self, args):
cmd_string = string.join(args, " ")
logger.warn("XEN PVM icmd(args=[%s]) called, but not yet implemented", cmd_string)
return
def term(self, sh="/bin/sh"):
logger.warn("XEN PVM term() called, but not yet implemented")
return
def termcmdstring(self, sh="/bin/sh"):
"""
We may add "sudo" to the command string because the GUI runs as a
normal user. Use SSH if control interface is available, otherwise
use Xen console with a keymapping for easy login.
"""
controlifc = None
for ifc in self.netifs():
if hasattr(ifc, "control") and ifc.control is True:
controlifc = ifc
break
cmd = "xterm "
# use SSH if control interface is available
if controlifc:
controlip = controlifc.addrlist[0].split("/")[0]
cmd += "-e ssh root@%s" % controlip
return cmd
# otherwise use "xm console"
# pw = self.getconfigitem("root_password")
# cmd += "-xrm 'XTerm*VT100.translations: #override <Key>F1: "
# cmd += "string(\"root\\n\") \\n <Key>F2: string(\"%s\\n\")'" % pw
cmd += "-e sudo %s console %s" % (XM_PATH, self.vmname)
return cmd
def shcmd(self, cmdstr, sh="/bin/sh"):
logger.warn("XEN PVM shcmd(args=[%s]) called, but not yet implemented", cmdstr)
return
def mount(self, source, target):
logger.warn("XEN PVM Nodes can't bind-mount filesystems")
def umount(self, target):
logger.warn("XEN PVM Nodes can't bind-mount filesystems")
def newifindex(self):
self.lock.acquire()
try:
while self.ifindex in self._netif:
self.ifindex += 1
ifindex = self.ifindex
self.ifindex += 1
return ifindex
finally:
self.lock.release()
def getifindex(self, netif):
for ifindex in self._netif:
if self._netif[ifindex] is netif:
return ifindex
return -1
def addnetif(self, netif, ifindex):
logger.warn("XEN PVM addnetif() called")
PyCoreNode.addnetif(self, netif, ifindex)
def delnetif(self, ifindex):
logger.warn("XEN PVM delnetif() called")
PyCoreNode.delnetif(self, ifindex)
def newveth(self, ifindex=None, ifname=None, net=None, hwaddr=None):
logger.warn("XEN PVM newveth(ifindex=%s, ifname=%s) called", ifindex, ifname)
self.lock.acquire()
try:
if ifindex is None:
ifindex = self.newifindex()
if ifname is None:
ifname = "eth%d" % ifindex
sessionid = self.session.short_session_id()
name = "n%s.%s.%s" % (self.objid, ifindex, sessionid)
localname = "n%s.%s.%s" % (self.objid, ifname, sessionid)
ifclass = XenVEth
veth = ifclass(node=self, name=name, localname=localname,
mtu=1500, net=net, hwaddr=hwaddr)
veth.name = ifname
try:
self.addnetif(veth, ifindex)
except:
veth.shutdown()
del veth
raise
return ifindex
finally:
self.lock.release()
def newtuntap(self, ifindex=None, ifname=None, net=None):
logger.warn("XEN PVM newtuntap() called but not implemented")
def sethwaddr(self, ifindex, addr):
self._netif[ifindex].sethwaddr(addr)
if self.up:
pass
# self.cmd([IP_BIN, "link", "set", "dev", self.ifname(ifindex),
# "address", str(addr)])
def addaddr(self, ifindex, addr):
if self.up:
pass
# self.cmd([IP_BIN, "addr", "add", str(addr),
# "dev", self.ifname(ifindex)])
self._netif[ifindex].addaddr(addr)
def deladdr(self, ifindex, addr):
try:
self._netif[ifindex].deladdr(addr)
except ValueError:
logger.exception("trying to delete unknown address: %s", addr)
if self.up:
pass
# self.cmd([IP_BIN, "addr", "del", str(addr),
# "dev", self.ifname(ifindex)])
valid_deladdrtype = ("inet", "inet6", "inet6link")
def delalladdr(self, ifindex, addrtypes=valid_deladdrtype):
addr = self.getaddr(self.ifname(ifindex), rescan=True)
for t in addrtypes:
if t not in self.valid_deladdrtype:
raise ValueError("addr type must be in: " + " ".join(self.valid_deladdrtype))
for a in addr[t]:
self.deladdr(ifindex, a)
# update cached information
self.getaddr(self.ifname(ifindex), rescan=True)
# Xen PVM relies on boot process to bring up links
# def ifup(self, ifindex):
# if self.up:
# self.cmd([IP_BIN, "link", "set", self.ifname(ifindex), "up"])
def newnetif(self, net=None, addrlist=[], hwaddr=None, ifindex=None, ifname=None):
logger.warn("XEN PVM newnetif(ifindex=%s, ifname=%s) called", ifindex, ifname)
self.lock.acquire()
if not self.up:
self.lock.release()
raise Exception("Can't add veth as VM disk isn't ready")
if self.booted:
self.lock.release()
raise Exception("Can't add veth as VM is already running")
try:
if nodeutils.is_node(net, NodeTypes.EMANE):
raise Exception("Xen PVM doesn't yet support Emane nets")
# ifindex = self.newtuntap(ifindex = ifindex, ifname = ifname,
# net = net)
# # TUN/TAP is not ready for addressing yet; the device may
# # take some time to appear, and installing it into a
# # namespace after it has been bound removes addressing;
# # save addresses with the interface now
# self.attachnet(ifindex, net)
# netif = self.netif(ifindex)
# netif.sethwaddr(hwaddr)
# for addr in maketuple(addrlist):
# netif.addaddr(addr)
# return ifindex
else:
ifindex = self.newveth(ifindex=ifindex, ifname=ifname,
net=net, hwaddr=hwaddr)
if net is not None:
self.attachnet(ifindex, net)
rulefile = os.path.join(self.getconfigitem("mount_path"),
self.etcdir,
"udev/rules.d/70-persistent-net.rules")
f = self.openpausednodefile(rulefile, "a")
f.write(
"\n# Xen PVM virtual interface #%s %s with MAC address %s\n" % (ifindex, self.ifname(ifindex), hwaddr))
# Using MAC address as we're now loading PVM net driver "early"
# OLD: Would like to use MAC address, but udev isn't working with paravirtualized NICs. Perhaps the "set hw address" isn't triggering a rescan.
f.write(
'SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="%s", KERNEL=="eth*", NAME="%s"\n' % (
hwaddr, self.ifname(ifindex)))
# f.write('SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", DEVPATH=="/devices/vif-%s/?*", KERNEL=="eth*", NAME="%s"\n' % (ifindex, self.ifname(ifindex)))
f.close()
if hwaddr:
self.sethwaddr(ifindex, hwaddr)
for addr in utils.make_tuple(addrlist):
self.addaddr(ifindex, addr)
# self.ifup(ifindex)
return ifindex
finally:
self.lock.release()
def connectnode(self, ifname, othernode, otherifname):
logger.warn("XEN PVM connectnode() called")
# tmplen = 8
# tmp1 = "tmp." + "".join([random.choice(string.ascii_lowercase)
# for x in xrange(tmplen)])
# tmp2 = "tmp." + "".join([random.choice(string.ascii_lowercase)
# for x in xrange(tmplen)])
# check_call([IP_BIN, "link", "add", "name", tmp1,
# "type", "veth", "peer", "name", tmp2])
#
# check_call([IP_BIN, "link", "set", tmp1, "netns", str(self.pid)])
# self.cmd([IP_BIN, "link", "set", tmp1, "name", ifname])
# self.addnetif(PyCoreNetIf(self, ifname), self.newifindex())
#
# check_call([IP_BIN, "link", "set", tmp2, "netns", str(othernode.pid)])
# othernode.cmd([IP_BIN, "link", "set", tmp2, "name", otherifname])
# othernode.addnetif(PyCoreNetIf(othernode, otherifname),
# othernode.newifindex())
def addfile(self, srcname, filename, mode=0644):
self.lock.acquire()
if not self.up:
self.lock.release()
raise Exception("Can't access VM file as VM disk isn't ready")
if self.booted:
self.lock.release()
raise Exception("Can't access VM file as VM is already running")
if filename in self.files_to_ignore:
# self.warn("XEN PVM addfile(filename=%s) ignored" % filename)
self.lock.release()
return
if filename in self.files_redirection:
redirection_filename = self.files_redirection[filename]
logger.warn("XEN PVM addfile(filename=%s) redirected to %s", filename, redirection_filename)
filename = redirection_filename
try:
fin = open(srcname, "r")
contents = fin.read()
fin.close()
fout = self.openpausednodefile(filename, "w")
fout.write(contents)
os.chmod(fout.name, mode)
fout.close()
logger.info("created nodefile: %s; mode: 0%o", fout.name, mode)
finally:
self.lock.release()
logger.warn("XEN PVM addfile(filename=%s) called", filename)
# shcmd = "mkdir -p $(dirname "%s") && mv "%s" "%s" && sync" % \
# (filename, srcname, filename)
# self.shcmd(shcmd)
def unmount_all(self, path):
"""
Namespaces inherit the host mounts, so we need to ensure that all
namespaces have unmounted our temporary mount area so that the
kpartx command will succeed.
"""
# Session.bootnodes() already has self.session._objslock
for o in self.session.objects.itervalues():
if not isinstance(o, LxcNode):
continue
o.umount(path)
@ -1,301 +0,0 @@
"""
xenconfig.py: Implementation of the XenConfigManager class for managing
configurable items for XenNodes.
Configuration for a XenNode is available at these three levels:
Global config: XenConfigManager.configs[0] = (type="xen", values)
Nodes of this machine type have this config. These are the default values.
XenConfigManager.default_config comes from defaults + xen.conf
Node type config: XenConfigManager.configs[0] = (type="mytype", values)
All nodes of this type have this config.
Node-specific config: XenConfigManager.configs[nodenumber] = (type, values)
The node having this specific number has this config.
"""
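The three-level precedence this docstring describes can be sketched as a small standalone illustration. The dict layout, key names, and helper below are hypothetical simplifications, not the real XenConfigManager API:

```python
# Standalone sketch of the three-level config lookup described above.
GLOBAL = 0  # key 0 stores machine-wide and per-node-type defaults

configs = {
    GLOBAL: {
        "xen": {"ram_size": "256"},       # global defaults (from xen.conf)
        "mytype": {"ram_size": "512"},    # node-type defaults
    },
    3: {"mytype": {"ram_size": "1024"}},  # node-specific config for node 3
}

def getconfigitem(name, nodenum, conftype):
    # most specific first: node, then node type, then machine type
    for num, ctype in ((nodenum, conftype), (GLOBAL, conftype), (GLOBAL, "xen")):
        values = configs.get(num, {}).get(ctype)
        if values and name in values:
            return values[name]
    return None
```

For example, node 3 resolves `ram_size` to its own override, while an unconfigured node falls back through the type default to the global one.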
import ConfigParser
import os
import string
from core import constants
from core import logger
from core.api import coreapi
from core.conf import Configurable
from core.conf import ConfigurableManager
from core.enumerations import ConfigDataTypes
from core.enumerations import ConfigFlags
from core.enumerations import ConfigTlvs
from core.enumerations import RegisterTlvs
class XenConfigManager(ConfigurableManager):
"""
Xen controller object. Lives in a Session instance and is used for
building Xen profiles.
"""
name = "xen"
config_type = RegisterTlvs.EMULATION_SERVER.value
def __init__(self, session):
"""
Creates a XenConfigManager instance.
:param core.session.Session session: session this manager is tied to
:return: nothing
"""
ConfigurableManager.__init__(self)
self.default_config = XenDefaultConfig(session, object_id=None)
self.loadconfigfile()
def setconfig(self, nodenum, conftype, values):
"""
Add configuration values for a node to a dictionary; values are
usually received from a Configuration Message, and may refer to a
node for which no object exists yet
:param int nodenum: node id to configure
:param str conftype: configuration type
:param tuple values: values to configure
:return: None
"""
# used for storing the global default config
if nodenum is None:
nodenum = 0
return ConfigurableManager.setconfig(self, nodenum, conftype, values)
def getconfig(self, nodenum, conftype, defaultvalues):
"""
Get configuration values for a node; if the values don't exist in
our dictionary then return the default values supplied; if conftype
is None then we return a match on any conftype.
:param int nodenum: node id to configure
:param str conftype: configuration type
:param tuple defaultvalues: default values to return
:return: configuration for node and config type
:rtype: tuple
"""
# used for storing the global default config
if nodenum is None:
nodenum = 0
return ConfigurableManager.getconfig(self, nodenum, conftype, defaultvalues)
def clearconfig(self, nodenum):
"""
Remove configuration values for a node
:param int nodenum: node id to clear config
:return: nothing
"""
ConfigurableManager.clearconfig(self, nodenum)
if 0 in self.configs:
self.configs.pop(0)
def configure(self, session, config_data):
"""
Handle configuration messages for global Xen config.
:param core.conf.ConfigData config_data: configuration data for carrying out a configuration
"""
return self.default_config.configure(self, config_data)
def loadconfigfile(self, filename=None):
"""
Load defaults from the /etc/core/xen.conf file into dict object.
:param str filename: file name of configuration to load
:return: nothing
"""
if filename is None:
filename = os.path.join(constants.CORE_CONF_DIR, "xen.conf")
cfg = ConfigParser.SafeConfigParser()
if filename not in cfg.read(filename):
logger.warn("unable to read Xen config file: %s", filename)
return
section = "xen"
if not cfg.has_section(section):
logger.warn("%s is missing a xen section!", filename)
return
self.configfile = dict(cfg.items(section))
# populate default config items from config file entries
vals = list(self.default_config.getdefaultvalues())
names = self.default_config.getnames()
for i in range(len(names)):
if names[i] in self.configfile:
vals[i] = self.configfile[names[i]]
# this sets XenConfigManager.configs[0] = (type="xen", vals)
self.setconfig(None, self.default_config.name, vals)
def getconfigitem(self, name, model=None, node=None, value=None):
"""
Get a config item of the given name, first looking for node-specific
configuration, then model specific, and finally global defaults.
If a value is supplied, it will override any stored config.
:param str name: name of config item to get
:param model: model config to get
:param node: node config to get
:param value: value to override stored config, if provided
:return: the configured value, or None if not found
"""
if value is not None:
return value
n = None
if node:
n = node.objid
(t, v) = self.getconfig(nodenum=n, conftype=model, defaultvalues=None)
if n is not None and v is None:
# get item from default config for the node type
(t, v) = self.getconfig(nodenum=None, conftype=model, defaultvalues=None)
if v is None:
# get item from default config for the machine type
(t, v) = self.getconfig(nodenum=None, conftype=self.default_config.name, defaultvalues=None)
confignames = self.default_config.getnames()
if v and name in confignames:
i = confignames.index(name)
return v[i]
else:
# name may only exist in config file
if name in self.configfile:
return self.configfile[name]
else:
# logger.warn("missing config item '%s'" % name)
return None
class XenConfig(Configurable):
"""
Manage Xen configuration profiles.
"""
@classmethod
def configure(cls, xen, config_data):
"""
Handle configuration messages for setting up a model.
Similar to Configurable.configure(), but considers opaque data
for indicating node types.
:param xen: xen instance to configure
:param core.conf.ConfigData config_data: configuration data for carrying out a configuration
"""
reply = None
node_id = config_data.node
object_name = config_data.object
config_type = config_data.type
opaque = config_data.opaque
values_str = config_data.data_values
nodetype = object_name
if opaque is not None:
opaque_items = opaque.split(":")
if len(opaque_items) != 2:
logger.warn("xen config: invalid opaque data in conf message")
return None
nodetype = opaque_items[1]
logger.info("received configure message for %s", nodetype)
if config_type == ConfigFlags.REQUEST.value:
logger.info("replying to configure request for %s", nodetype)
# when object name is "all", the reply to this request may be None
# if this node has not been configured for this model; otherwise we
# reply with the defaults for this model
if object_name == "all":
typeflags = ConfigFlags.UPDATE.value
else:
typeflags = ConfigFlags.NONE.value
values = xen.getconfig(node_id, nodetype, defaultvalues=None)[1]
if values is None:
# get defaults from default "xen" config which includes
# settings from both cls._confdefaultvalues and xen.conf
defaults = cls.getdefaultvalues()
values = xen.getconfig(node_id, cls.name, defaults)[1]
if values is None:
return None
# reply with config options
if node_id is None:
node_id = 0
reply = cls.config_data(0, node_id, typeflags, nodetype, values)
elif config_type == ConfigFlags.RESET.value:
if object_name == "all":
xen.clearconfig(node_id)
# elif conftype == coreapi.CONF_TYPE_FLAGS_UPDATE:
else:
# store the configuration values for later use, when the XenNode
# object has been created
if object_name is None:
logger.info("no configuration object for node %s" % node_id)
return None
if values_str is None:
# use default or preconfigured values
defaults = cls.getdefaultvalues()
values = xen.getconfig(node_id, cls.name, defaults)[1]
else:
# use new values supplied from the conf message
values = values_str.split("|")
xen.setconfig(node_id, nodetype, values)
return reply
@classmethod
def config_data(cls, flags, node_id, type_flags, nodetype, values):
"""
Convert this class to a Config API message. Some TLVs are defined
by the class, but node number, conf type flags, and values must
be passed in.
:param int flags: configuration flags
:param int node_id: node id
:param int type_flags: type flags
:param int nodetype: node type
:param tuple values: values
:return: configuration message
"""
values_str = string.join(values, "|")
tlvdata = ""
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.NODE.value, node_id)
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.OBJECT.value, cls.name)
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.TYPE.value, type_flags)
datatypes = tuple(map(lambda x: x[1], cls.config_matrix))
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.DATA_TYPES.value, datatypes)
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.VALUES.value, values_str)
captions = reduce(lambda a, b: a + "|" + b, map(lambda x: x[4], cls.config_matrix))
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.CAPTIONS.value, captions)
possiblevals = reduce(lambda a, b: a + "|" + b, map(lambda x: x[3], cls.config_matrix))
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.POSSIBLE_VALUES.value, possiblevals)
if cls.bitmap is not None:
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.BITMAP.value, cls.bitmap)
if cls.config_groups is not None:
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.GROUPS.value, cls.config_groups)
opaque = "%s:%s" % (cls.name, nodetype)
tlvdata += coreapi.CoreConfigTlv.pack(ConfigTlvs.OPAQUE.value, opaque)
msg = coreapi.CoreConfMessage.pack(flags, tlvdata)
return msg
class XenDefaultConfig(XenConfig):
"""
Global default Xen configuration options.
"""
name = "xen"
# Configuration items:
# ("name", "type", "default", "possible-value-list", "caption")
config_matrix = [
("ram_size", ConfigDataTypes.STRING.value, "256", "",
"ram size (MB)"),
("disk_size", ConfigDataTypes.STRING.value, "256M", "",
"disk size (use K/M/G suffix)"),
("iso_file", ConfigDataTypes.STRING.value, "", "",
"iso file"),
("mount_path", ConfigDataTypes.STRING.value, "", "",
"mount path"),
("etc_path", ConfigDataTypes.STRING.value, "", "",
"etc path"),
("persist_tar_iso", ConfigDataTypes.STRING.value, "", "",
"iso persist tar file"),
("persist_tar", ConfigDataTypes.STRING.value, "", "",
"persist tar file"),
("root_password", ConfigDataTypes.STRING.value, "password", "",
"root password"),
]
config_groups = "domU properties:1-%d" % len(config_matrix)
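The `config_data()` method above flattens `config_matrix` rows ("name", "type", "default", "possible-value-list", "caption") into "|"-separated strings for the Config TLVs. A minimal illustration of that flattening, with hypothetical rows and numeric type codes:

```python
# Illustrative config_matrix rows; values are hypothetical, not the
# actual ConfigDataTypes enum members.
config_matrix = [
    ("ram_size", 10, "256", "", "ram size (MB)"),
    ("disk_size", 10, "256M", "", "disk size (use K/M/G suffix)"),
]

datatypes = tuple(row[1] for row in config_matrix)    # DATA_TYPES TLV
captions = "|".join(row[4] for row in config_matrix)  # CAPTIONS TLV
defaults = "|".join(row[2] for row in config_matrix)  # VALUES TLV
```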
@ -202,8 +202,6 @@ class CoreDocumentParser0(object):
mgr = None
self.parsenetem(model, obj, kvs)
elif name[:3] == "xen":
mgr = self.session.xen
# TODO: assign other config managers here
if mgr:
mgr.setconfig_keyvalues(nodenum, name, kvs)
@ -204,8 +204,6 @@ class CoreDocumentParser1(object):
mgr = self.session.mobility
elif model_name.startswith('emane'):
mgr = self.session.emane
elif model_name.startswith('xen'):
mgr = self.session.xen
else:
# TODO: any other config managers?
raise NotImplementedError
@ -685,8 +683,6 @@ class CoreDocumentParser1(object):
'host': 'host.gif',
'PC': 'pc.gif',
'mdr': 'mdr.gif',
# 'prouter': 'router_green.gif',
# 'xen': 'xen.gif'
}
icon_set = False
for child in xmlutils.iter_children_with_name(element, 'CORE:presentation'):
@ -1,35 +0,0 @@
# Configuration file for CORE Xen support
### Xen configuration options ###
[xen]
### The following three configuration options *must* be specified in this
### system-wide configuration file.
# LVM volume group name for creating new volumes
vg_name = domU
# directory containing an RSA SSH host key and authorized_keys file to use
# within the VM
ssh_key_path = /opt/core-xen/ssh
# extra arguments to pass via 'extra=' option to 'xm create'
xm_create_extra = console=hvc0 rtr_boot=/dev/xvda rtr_boot_fstype=iso9660 rtr_root=/boot/root.img rtr_persist=LABEL=persist rtr_swap=LABEL=swap rtr_overlay_limit=500
### The remaining configuration options *may* be specified here.
### If not specified here, they *must* be specified in the user (or scenario's)
### nodes.conf file as profile-specific configuration options.
# domU RAM memory size in MB
ram_size = 256
# domU disk size in MB
disk_size = 256M
# ISO filesystem to mount as read-only
iso_file = /opt/core-xen/iso-files/rtr.iso
# directory used temporarily as moint point for persistent area, under
# /tmp/pycore.nnnnn/nX.conf/
mount_path = /rtr/persist
# mount_path + this directory where configuration files are located on the VM
etc_path = config/etc
# name of tar file within the iso_file to unpack to mount_path
persist_tar_iso = persist-template.tar
# name of tar file in dom0 that will be unpacked to mount_path prior to boot
# the string '%h' will be replaced with the hostname (e.g. 'n3' for node 3)
persist_tar = /opt/core-xen/rtr-configs/custom-%%h.tar
# root password to set
root_password = password
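The `%%h` in `persist_tar` doubles the percent sign to escape ConfigParser's interpolation, so the value read back contains a single literal `%h` placeholder. A sketch of reading the `[xen]` section with Python 3's `configparser` (the original code used Python 2's `ConfigParser.SafeConfigParser`), reproducing two options from the file above inline:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[xen]
vg_name = domU
persist_tar = /opt/core-xen/rtr-configs/custom-%%h.tar
""")
options = dict(cfg.items("xen"))
# "%%" escapes interpolation, so the stored value keeps a literal "%h"
# placeholder for the later hostname substitution
```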
@ -1,73 +0,0 @@
#!/bin/sh
if [ "z$1" = "z-h" -o "z$1" = "z--help" ]; then
echo "usage: $0 [-d]"
echo -n " Clean up all CORE Xen domUs, bridges, interfaces, "
echo "and session\n directories. Options:"
echo " -h show this help message and exit"
echo " -d also kill the Python daemon"
exit 0
fi
if [ `id -u` != 0 ]; then
echo "Permission denied. Re-run this script as root."
exit 1
fi
PATH="/sbin:/bin:/usr/sbin:/usr/bin"
export PATH
if [ "z$1" = "z-d" ]; then
pypids=`pidof python python2`
for p in $pypids; do
grep -q core-daemon /proc/$p/cmdline
if [ $? = 0 ]; then
echo "cleaning up core-daemon process: $p"
kill -9 $p
fi
done
fi
mount | awk '
/\/tmp\/pycore\./ { print "umount " $3; system("umount " $3); }
'
domus=`xm list | awk '
/^c.*-n.*/ { print $1; }'`
for domu in $domus
do
echo "destroy $domu"
xm destroy $domu
done
vgs=`vgs | awk '{ print $1; }'`
for vg in $vgs
do
if [ ! -x /dev/$vg ]; then
continue
fi
echo "searching volume group: $vg"
lvs=`ls /dev/$vg/c*-n*- 2> /dev/null`
for lv in $lvs
do
echo "removing volume $lv"
kpartx -d $lv
lvchange -an $lv
lvremove $lv
done
done
/sbin/ip link show | awk '
/b\.ctrlnet\.[0-9]+/ {print "removing interface " $2; system("ip link set " $2 " down; brctl delbr " $2); }
'
ls /sys/class/net | awk '
/^b\.[0-9]+\.[0-9]+$/ {print "removing interface " $1; system("ip link set " $1 " down; brctl delbr " $1); }
'
ebtables -L FORWARD | awk '
/^-.*b\./ {print "removing ebtables " $0; system("ebtables -D FORWARD " $0); print "removing ebtables chain " $4; system("ebtables -X " $4);}
'
rm -rf /tmp/pycore*
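The volume selection above globs `/dev/$vg/c*-n*-` inside each volume group; the same pattern can be sketched with Python's `fnmatch` (the volume names here are hypothetical examples):

```python
import fnmatch

# Hypothetical logical volume paths; only session volumes named like
# c<session>-n<node>- match the cleanup pattern.
volumes = ["/dev/domU/c1000-n3-", "/dev/domU/swap", "/dev/domU/c1000-n4-"]
doomed = fnmatch.filter(volumes, "/dev/*/c*-n*-")
```

Note that `fnmatch` wildcards are not path-aware (`*` also matches `/`), which mirrors why the man page warns the script should be used with care.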
@ -4,10 +4,9 @@ Defines how CORE will be built for installation.
import glob
import os
from setuptools import setup, find_packages
from distutils.command.install import install
from setuptools import setup, find_packages
_CORE_DIR = "/etc/core"
_MAN_DIR = "/usr/local/share/man/man1"
@ -59,14 +58,12 @@ class CustomInstall(install):
data_files = [
(_CORE_DIR, [
"data/core.conf",
"data/xen.conf",
"data/logging.conf",
]),
(_MAN_DIR, glob_files("../doc/man/**.1")),
]
data_files.extend(recursive_files(_SHARE_DIR, "examples"))
setup(
name="core",
version="5.1",
@ -86,7 +83,6 @@ setup(
"sbin/core-daemon",
"sbin/core-manage",
"sbin/coresendmsg",
"sbin/core-xen-cleanup",
],
description="Python components of CORE",
url="http://www.nrl.navy.mil/itd/ncs/products/core",
@ -449,9 +449,6 @@ This causes a separate init script to be installed that is tailored towards SUSE
The `zypper` command is used instead of `yum`.
For OpenSUSE/Xen based installations, refer to the `README-Xen` file included
in the CORE source.
The Quagga routing suite is recommended for routing,
:ref:`Quagga_Routing_Software` for installation.
@ -69,23 +69,3 @@ Double-clicking on a physical node during runtime
opens a terminal with an SSH shell to that
node. Users should configure public-key SSH login as done with emulation
servers.
.. _xen:
xen
===
.. index:: xen machine type
The *xen* machine type is an experimental new type in CORE for managing
Xen domUs from within CORE. After further development,
it may be documented here.
Current limitations include only supporting ISO-based filesystems, and lack
of integration with node services, EMANE, and possibly other features of CORE.
There is a :file:`README-Xen` file available in the CORE source that contains
further instructions for setting up Xen-based nodes.
@ -12,7 +12,7 @@ if WANT_GUI
endif
if WANT_DAEMON
DAEMON_MANS = vnoded.1 vcmd.1 netns.1 core-daemon.1 coresendmsg.1 \
core-cleanup.1 core-xen-cleanup.1 core-manage.1
core-cleanup.1 core-manage.1
endif
man_MANS = $(GUI_MANS) $(DAEMON_MANS)
@ -25,7 +25,6 @@ generate-mans:
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-daemon -o core-daemon.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/coresendmsg -o coresendmsg.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-cleanup -o core-cleanup.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-xen-cleanup -o core-xen-cleanup.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-manage -o core-manage.1.new
.PHONY: diff
@ -20,7 +20,6 @@ remove the core-daemon.log file
.BR core-gui(1),
.BR core-daemon(1),
.BR coresendmsg(1),
.BR core-xen-cleanup(1),
.BR vcmd(1),
.BR vnoded(1)
.SH BUGS
@ -1,28 +0,0 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.40.4.
.TH CORE-XEN-CLEANUP "1" "2014-08-06" "CORE-XEN-CLEANUP" "User Commands"
.SH NAME
core-xen-cleanup \- clean-up script for CORE Xen domUs
.SH DESCRIPTION
usage: core\-xen\-cleanup [\-d]
.IP
Clean up all CORE Xen domUs, bridges, interfaces, and session
directories. Options:
.TP
\fB\-h\fR
show this help message and exit
.TP
\fB\-d\fR
also kill the Python daemon
.SH "SEE ALSO"
.BR core-gui(1),
.BR core-daemon(1),
.BR coresendmsg(1),
.BR core-cleanup(1),
.BR vcmd(1),
.BR vnoded(1)
.SH BUGS
Warning! This script will remove logical volumes that match the name "/dev/vg*/c*-n*-" on all volume groups. Use with care.
Report bugs to
.BI core-dev@pf.itd.nrl.navy.mil.
@ -1436,7 +1436,6 @@ Default Services and Node Types
Here are the default node types and their services:
.. index:: Xen
.. index:: physical nodes
* *router* - zebra, OSPFv2, OSPFv3, and IPForward services for IGP
@ -1450,10 +1449,6 @@ Here are the default node types and their services:
* *prouter* - a physical router, having the same default services as the
*router* node type; for incorporating Linux testbed machines into an
emulation, the :ref:`Machine_Types` is set to :ref:`physical`.
* *xen* - a Xen-based router, having the same default services as the
*router* node type; for incorporating Xen domUs into an emulation, the
:ref:`Machine_Types` is set to :ref:`xen`, and different *profiles* are
available.
Configuration files can be automatically generated by each service. For
example, CORE automatically generates routing protocol configuration for the
@ -30,7 +30,7 @@ array set g_execRequests { shell "" observer "" }
# for a simulator, uncomment this line or cut/paste into debugger:
# set XSCALE 4.0; set YSCALE 4.0; set XOFFSET 1800; set YOFFSET 300
array set nodetypes { 0 def 1 phys 2 xen 3 tbd 4 lanswitch 5 hub \
array set nodetypes { 0 def 1 phys 2 tbd 3 tbd 4 lanswitch 5 hub \
6 wlan 7 rj45 8 tunnel 9 ktunnel 10 emane }
array set regtypes { wl 1 mob 2 util 3 exec 4 gui 5 emul 6 }
@ -470,7 +470,7 @@ proc apiNodeCreate { node vals_ref } {
set nodetype $nodetypes($vals(type))
set nodename $vals(name)
if { $nodetype == "emane" } { set nodetype "wlan" } ;# special case - EMANE
if { $nodetype == "def" || $nodetype == "xen" } { set nodetype "router" }
if { $nodetype == "def" } { set nodetype "router" }
newNode [list $nodetype $node] ;# use node number supplied from API message
setNodeName $node $nodename
if { $vals(canv) == "" } {
@ -509,7 +509,7 @@ proc apiNodeCreate { node vals_ref } {
set model $vals(model)
if { $model != "" && $vals(type) < 4} {
# set model only for (0 def 1 phys 2 xen 3 tbd) 4 lanswitch
# set model only for (0 def 1 phys 2 tbd 3 tbd) 4 lanswitch
setNodeModel $node $model
if { [lsearch -exact [getNodeTypeNames] $model] == -1 } {
puts "warning: unknown node type '$model' in Node message!"
@ -2920,7 +2920,6 @@ proc getNodeTypeAPI { node } {
jail { return 0x0 }
OVS { return 0x0 }
physical { return 0x1 }
xen { return 0x2 }
tbd { return 0x3 }
lanswitch { return 0x4 }
hub { return 0x5 }
@ -2,7 +2,7 @@ comments {
Kitchen Sink
============
Contains every type of node available in CORE, except for the Xen and physical (prouter)
Contains every type of node available in CORE, except for physical (prouter)
machine types, and nodes distributed on other emulation servers.
To get the RJ45 node to work, a test0 interface should first be created like this:
@ -1,60 +0,0 @@
#!/bin/sh
#
# cleanup.sh
#
# Copyright 2005-2013 the Boeing Company.
# See the LICENSE file included in this distribution.
#
# Removes leftover netgraph nodes and vimages from an emulation that
# did not exit properly.
#
ngnodes="pipe eiface hub switch wlan"
vimages=`vimage -l | fgrep -v " " | cut -d: -f 1 | sed s/\"//g`
# shutdown netgraph nodes
for ngn in $ngnodes
do
nodes=`ngctl list | grep $ngn | awk '{print $2}'`
for n in $nodes
do
echo ngctl shutdown $n:
ngctl shutdown $n:
done
done
# kills processes and remove vimages
for vimage in $vimages
do
procs=`vimage $vimage ps x | awk '{print $1}'`
for proc in $procs
do
if [ $proc != "PID" ]
then
echo vimage $vimage kill $proc
vimage $vimage kill $proc
fi
done
loopback=`vimage $vimage ifconfig -a | head -n 1 | awk '{split($1,a,":"); print a[1]}'`
if [ "$loopback" != "" ]
then
addrs=`ifconfig $loopback | grep inet | awk '{print $2}'`
for addr in $addrs
do
echo vimage $vimage ifconfig $loopback $addr -alias
vimage $vimage ifconfig $loopback $addr -alias
if [ $? != 0 ]
then
vimage $vimage ifconfig $loopback inet6 $addr -alias
fi
done
echo vimage $vimage ifconfig $loopback down
vimage $vimage ifconfig $loopback down
fi
vimage $vimage kill -9 -1 2> /dev/null
echo vimage -d $vimage
vimage -d $vimage
done
# clean up temporary area
rm -rf /tmp/pycore.*
@ -35,7 +35,6 @@ TINY_ICONS = tiny/button.play.gif \
tiny/folder.gif \
tiny/cel.gif \
tiny/fileopen.gif \
tiny/xen.gif \
tiny/plot.gif
NORM_ICONS = normal/gps-diagram.xbm \
@ -53,8 +52,7 @@ NORM_ICONS = normal/gps-diagram.xbm \
normal/document-properties.gif \
normal/thumb-unknown.gif \
normal/router_purple.gif normal/router_yellow.gif \
normal/ap.gif normal/mdr.gif \
normal/ap.gif normal/mdr.gif
normal/xen.gif
SVG_ICONS = svg/ap.svg \
svg/cel.svg \
@ -71,8 +69,7 @@ SVG_ICONS = svg/ap.svg \
svg/router_yellow.svg \
svg/start.svg \
svg/tunnel.svg \
svg/vlan.svg \
svg/vlan.svg
svg/xen.svg
#
# Icon files (/usr/local/share/core/icons/[tiny,normal,svg])
@ -1,181 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="146"
height="100"
id="svg13653"
sodipodi:version="0.32"
inkscape:version="0.48.0 r9654"
sodipodi:docname="xen.svg"
version="1.0"
inkscape:export-filename="xen.png"
inkscape:export-xdpi="30.464558"
inkscape:export-ydpi="30.464558">
<defs
id="defs13655">
<inkscape:perspective
sodipodi:type="inkscape:persp3d"
inkscape:vp_x="0 : 99.931252 : 1"
inkscape:vp_y="0 : 1000 : 0"
inkscape:vp_z="199.10001 : 99.931252 : 1"
inkscape:persp3d-origin="99.550003 : 66.620834 : 1"
id="perspective3835" />
<linearGradient
id="linearGradient12828">
<stop
id="stop12830"
offset="0"
style="stop-color:#484849;stop-opacity:1;" />
<stop
style="stop-color:#434344;stop-opacity:1;"
offset="0"
id="stop12862" />
<stop
id="stop12832"
offset="1.0000000"
style="stop-color:#8f8f90;stop-opacity:0.0000000;" />
</linearGradient>
<radialGradient
inkscape:collect="always"
xlink:href="#linearGradient12828"
id="radialGradient13651"
cx="328.57144"
cy="602.7193"
fx="328.57144"
fy="602.7193"
r="147.14285"
gradientTransform="matrix(1,0,0,0.177184,0,495.9268)"
gradientUnits="userSpaceOnUse" />
<linearGradient
id="linearGradient12001">
<stop
style="stop-color:#1b4a78;stop-opacity:1;"
offset="0"
id="stop12003" />
<stop
style="stop-color:#5dacd1;stop-opacity:1;"
offset="1"
id="stop12005" />
</linearGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient12001"
id="linearGradient13633"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.471308,0,0,0.471308,118.8781,123.5182)"
x1="175.71875"
y1="737.01562"
x2="470.00089"
y2="737.01562" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient12001"
id="linearGradient3844"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.471308,0,0,0.471308,-45.6934,-239.9103)"
x1="175.71875"
y1="737.01562"
x2="470.00089"
y2="737.01562" />
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1"
inkscape:cx="118.57814"
inkscape:cy="50.488033"
inkscape:document-units="px"
inkscape:current-layer="layer1"
inkscape:window-width="1280"
inkscape:window-height="949"
inkscape:window-x="1631"
inkscape:window-y="29"
showgrid="false"
inkscape:window-maximized="0" />
<metadata
id="metadata13658">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Capa 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-33.124945,-44.636248)">
<path
style="fill:url(#linearGradient3844);fill-opacity:1;stroke:none"
d="m 37.14136,72.27878 0,0.29457 c 0.006,-0.0975 0.0206,-0.19729 0.0295,-0.29457 l -0.0295,0 z m 138.62351,0 c 0.0302,0.33044 0.0589,0.66821 0.0589,1.00153 l 0,-1.00153 -0.0589,0 z m 0.0589,1.00153 c -1e-5,15.05224 -31.07495,27.26223 -69.35594,27.26223 -37.68286,1e-5 -68.3765,-11.82771 -69.32649,-26.55527 l 0,40.67979 c -0.0151,0.23376 -0.0147,0.45704 -0.0147,0.69223 0,0.22546 8.7e-4,0.45335 0.0147,0.67751 0.91151,14.74102 31.61889,26.59945 69.32649,26.59945 37.7076,0 68.41498,-11.85843 69.32648,-26.59945 l 0.0295,0 0,-0.50077 c 9.5e-4,-0.0587 0,-0.11794 0,-0.17674 0,-0.0588 9.4e-4,-0.11803 0,-0.17674 l 0,-41.90224 z"
id="path13626" />
<path
sodipodi:type="arc"
style="fill:#3a78a0;fill-opacity:1;stroke:none"
id="path11090"
sodipodi:cx="328.57144"
sodipodi:cy="602.7193"
sodipodi:rx="147.14285"
sodipodi:ry="26.071428"
d="m 475.71429,602.7193 c 0,14.39885 -65.87809,26.07143 -147.14285,26.07143 -81.26475,0 -147.14285,-11.67258 -147.14285,-26.07143 0,-14.39885 65.8781,-26.07143 147.14285,-26.07143 81.26476,0 147.14285,11.67258 147.14285,26.07143 z"
transform="matrix(0.471308,0,0,1.045917,-48.3838,-554.9944)" />
<g
id="g13565"
style="fill:#f2fdff;fill-opacity:0.71171169"
transform="matrix(0.84958,0.276715,-0.703617,0.334119,278.6313,-230.2001)">
<path
id="path13507"
d="m 328.66945,592.8253 -5.97867,10.35298 -5.97867,10.35297 6.18436,0 0,21.24074 11.53226,0 0,-21.24074 6.18435,0 -5.97867,-10.35297 -5.96496,-10.35298 z"
style="fill:#f2fdff;fill-opacity:0.71171169;stroke:none" />
<path
id="path13509"
d="m 328.66945,687.10951 -5.97867,-10.35298 -5.97867,-10.35297 6.18436,0 0,-21.24074 11.53226,0 0,21.24074 6.18435,0 -5.97867,10.35297 -5.96496,10.35298 z"
style="fill:#f2fdff;fill-opacity:0.71171169;stroke:none" />
<path
id="path13511"
d="m 333.74751,639.82449 10.35297,-5.97867 10.35297,-5.97867 0,6.18436 21.24074,0 0,11.53225 -21.24074,0 0,6.18436 -10.35297,-5.97867 -10.35297,-5.96496 z"
style="fill:#f2fdff;fill-opacity:0.71171169;stroke:none" />
<path
id="path13513"
d="m 323.35667,639.82449 -10.35297,-5.97867 -10.35298,-5.97867 0,6.18436 -21.24073,0 0,11.53225 21.24073,0 0,6.18436 10.35298,-5.97867 10.35297,-5.96496 z"
style="fill:#f2fdff;fill-opacity:0.71171169;stroke:none" />
</g>
<rect
style="fill:#f2fdff;fill-opacity:0.70980394;stroke:#000000;stroke-opacity:1"
id="rect6161"
width="91.923882"
height="37.476658"
x="52.679455"
y="60.048466"
transform="translate(33.124945,44.636248)"
rx="5.454824"
ry="5.454824" />
<text
xml:space="preserve"
style="font-size:32px;font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:Bitstream Charter Bold"
x="91.107697"
y="135.0903"
id="text6673"><tspan
sodipodi:role="line"
id="tspan6675"
x="91.107697"
y="135.0903">Xen</tspan></text>
</g>
</svg>

(deleted image, 6.9 KiB)

(binary file not shown; deleted image, 905 B)

View file

@@ -26,14 +26,12 @@ array set g_node_types_default {
 	5 {prouter router_green.gif router_green.gif \
 	   {zebra OSPFv2 OSPFv3 IPForward} \
 	   physical {built-in type for physical nodes}}
-	6 {xen xen.gif xen.gif {zebra OSPFv2 OSPFv3 IPForward} \
-	   xen {built-in type for Xen PVM domU router}}
 	7 {OVS lanswitch.gif lanswitch.gif {DefaultRoute SSH OvsService} OVS {} }
 }
 # possible machine types for nodes
-set MACHINE_TYPES "netns physical xen OVS"
+set MACHINE_TYPES "netns physical OVS"
 # array populated from nodes.conf file
 array set g_node_types { }
@@ -187,7 +185,7 @@ proc getNodeTypeServices { type } {
 	return ""
 }
-# return the machine type (e.g. netns, physical, xen) of the currently selected
+# return the machine type (e.g. netns, physical) of the currently selected
 # node type from the toolbar
 proc getNodeTypeMachineType { type } {
 	global MACHINE_TYPES g_node_types
@@ -211,7 +209,7 @@ proc getNodeTypeProfile { type } {
 	return ""
 }
-# return the machine type (e.g. netns, physical, xen) of the currently selected
+# return the machine type (e.g. netns, physical) of the currently selected
 # node type from the toolbar
 proc getNodeTypeMachineType { type } {
 	global MACHINE_TYPES g_node_types
@@ -628,7 +626,7 @@ proc nodesConfigServices { wi services_or_profile } {
 	set sock [lindex [getEmulPlugin "*"] 2]
 	# node number 0 is sent, but these services are not associated with a node
 	if { $services_or_profile == "profile" } {
-		set services_or_profile $g_machine_type ;# address the e.g. "xen" model
+		set services_or_profile $g_machine_type ;# address the model
 		set opaque "$g_machine_type:$g_node_type_services_hint"
 	} else {
 		set opaque ""

View file

@@ -612,7 +612,6 @@ proc capTitle { cap } {
 #   Session options
 #   EMANE options
 #   EMANE model options, per-WLAN/per-interface
-#   node profile (Xen machine type)
 #
 proc popupCapabilityConfig { channel wlan model types values captions bmp possible_values groups } {
 	global node_list g_node_type_services_hint g_popupcap_keys g_prefs

View file

@@ -14,7 +14,6 @@
 @datarootdir@/man/man1/core-daemon.1
 @datarootdir@/man/man1/coresendmsg.1
 @datarootdir@/man/man1/core-cleanup.1
-@datarootdir@/man/man1/core-xen-cleanup.1
 @pyprefix@/lib/python2.7/dist-packages
 /etc/init.d
 /etc/logrotate.d

View file

@@ -147,7 +147,6 @@ fi
 %{_datadir}/%{name}/icons/normal/thumb-unknown.gif
 %{_datadir}/%{name}/icons/normal/tunnel.gif
 %{_datadir}/%{name}/icons/normal/wlan.gif
-%{_datadir}/%{name}/icons/normal/xen.gif
 %dir %{_datadir}/%{name}/icons/svg
 %{_datadir}/%{name}/icons/svg/ap.svg
 %{_datadir}/%{name}/icons/svg/cel.svg
@@ -165,7 +164,6 @@ fi
 %{_datadir}/%{name}/icons/svg/start.svg
 %{_datadir}/%{name}/icons/svg/tunnel.svg
 %{_datadir}/%{name}/icons/svg/vlan.svg
-%{_datadir}/%{name}/icons/svg/xen.svg
 %dir %{_datadir}/%{name}/icons/tiny
 %{_datadir}/%{name}/icons/tiny/ap.gif
 %{_datadir}/%{name}/icons/tiny/arrow.down.gif
@@ -219,7 +217,6 @@ fi
 %{_datadir}/%{name}/icons/tiny/twonode.gif
 %{_datadir}/%{name}/icons/tiny/view-refresh.gif
 %{_datadir}/%{name}/icons/tiny/wlan.gif
-%{_datadir}/%{name}/icons/tiny/xen.gif
 @CORE_LIB_DIR@/initgui.tcl
 @CORE_LIB_DIR@/ipv4.tcl
 @CORE_LIB_DIR@/ipv6.tcl
@@ -259,7 +256,6 @@ fi
 %files daemon
 %config @CORE_CONF_DIR@/core.conf
 %config @CORE_CONF_DIR@/perflogserver.conf
-%config @CORE_CONF_DIR@/xen.conf
 %dir %{_datadir}/%{name}
 %dir %{_datadir}/%{name}/examples
 %{_datadir}/%{name}/examples/controlnet_updown
@@ -308,7 +304,6 @@ fi
 %doc %{_mandir}/man1/core-daemon.1.gz
 %doc %{_mandir}/man1/core-manage.1.gz
 %doc %{_mandir}/man1/coresendmsg.1.gz
-%doc %{_mandir}/man1/core-xen-cleanup.1.gz
 %doc %{_mandir}/man1/netns.1.gz
 %doc %{_mandir}/man1/vcmd.1.gz
 %doc %{_mandir}/man1/vnoded.1.gz
@@ -393,15 +388,10 @@ fi
 %{python_sitelib}/core/services/utility.py*
 %{python_sitelib}/core/services/xorp.py*
 %{python_sitelib}/core/session.py*
-%dir %{python_sitelib}/core/xen
-%{python_sitelib}/core/xen/__init__.py*
-%{python_sitelib}/core/xen/xenconfig.py*
-%{python_sitelib}/core/xen/xen.py*
 %{_sbindir}/core-cleanup
 %{_sbindir}/core-daemon
 %{_sbindir}/core-manage
 %{_sbindir}/coresendmsg
-%{_sbindir}/core-xen-cleanup
 %{_sbindir}/netns
 %{_sbindir}/vcmd
 %{_sbindir}/vnoded

View file

@@ -9,20 +9,15 @@
 CLEANFILES = core-daemon
-DISTCLEANFILES = Makefile.in xen/Makefile xen/Makefile.in
+DISTCLEANFILES = Makefile.in
 EXTRA_DIST = core-daemon-init.d \
 	core-daemon.service.in \
 	core-daemon-rc.d \
-	core-daemon-init.d-SUSE \
-	xen
+	core-daemon-init.d-SUSE
 SUBDIRS = perf
-
-# clean up dirs included by EXTRA_DIST
-dist-hook:
-	rm -rf $(distdir)/xen/.svn
 # install startup scripts based on --with-startup=option configure option
 # init.d (default), systemd, SUSE
 if WANT_INITD

View file

@@ -35,8 +35,8 @@
 #
 ### BEGIN INIT INFO
 # Provides: core-daemon
-# Required-Start: $network $remote_fs xend
-# Required-Stop: $network $remote_fs xend
+# Required-Start: $network $remote_fs
+# Required-Stop: $network $remote_fs
 # Default-Start: 3 5
 # Default-Stop: 0 1 2 6
 # Short-Description: core-daemon

View file

@@ -1,15 +0,0 @@
-# CORE
-# (c)2012 the Boeing Company.
-# See the LICENSE file included in this distribution.
-#
-# author: Jeff Ahrenholz <jeffrey.m.ahrenholz@boeing.com>
-#
-# Makefile for installing Xen scripts.
-#
-install-exec-hook:
-	test -d "$(DESTDIR)/etc/init.d" || \
-		mkdir -p $(DESTDIR)/etc/init.d
-	test -d "$(DESTDIR)/etc/xen/scripts" && \
-		cp -f linux/scripts/vif-core $(DESTDIR)/etc/xen/scripts

View file

@@ -1,48 +0,0 @@
-#!/bin/bash
-#============================================================================
-# ${XEN_SCRIPT_DIR}/vif-bridge
-#
-# Script for configuring a vif in bridged mode.
-# The hotplugging system will call this script if it is specified either in
-# the device configuration given to Xend, or the default Xend configuration
-# in ${XEN_CONFIG_DIR}/xend-config.sxp. If the script is specified in
-# neither of those places, then this script is the default.
-#
-# Usage:
-# vif-bridge (add|remove|online|offline)
-#
-# Environment vars:
-# vif         vif interface name (required).
-# XENBUS_PATH path to this device's details in the XenStore (required).
-#
-# Read from the store:
-# bridge  bridge to add the vif to (optional). Defaults to searching for the
-#         bridge itself.
-# ip      list of IP networks for the vif, space-separated (optional).
-#
-# up:
-# Enslaves the vif interface to the bridge and adds iptables rules
-# for its ip addresses (if any).
-#
-# down:
-# Removes the vif interface from the bridge and removes the iptables
-# rules for its ip addresses (if any).
-#============================================================================
-
-dir=$(dirname "$0")
-. "$dir/vif-common.sh"
-
-case "$command" in
-    online)
-        do_without_error ifconfig "$vif" up
-        ;;
-    offline)
-        do_without_error ifconfig "$vif" down
-        ;;
-esac
-
-log debug "Successful vif-core $command for $vif, bridge $bridge."
-
-if [ "$command" == "online" ]
-then
-    success
-fi
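
For reference, the dispatch pattern the deleted vif-core script relied on can be sketched in isolation. This is a minimal, illustrative stand-in, not CORE or Xen code: the `log` helper and the `vif_core` function name are invented here, and the sketch echoes the `ifconfig` command instead of running it, since the real helpers come from Xen's `vif-common.sh` and a real vif device.

```shell
#!/bin/bash
# Sketch of the hotplug dispatch used by the deleted vif-core script above.
# Stand-in for the log helper normally sourced from vif-common.sh:
log() { echo "[$1] ${*:2}" >&2; }

# Echo the interface command instead of invoking ifconfig, so the
# dispatch logic can be exercised without a Xen vif device present.
vif_core() {
    local command="$1"
    local vif="${vif:-vif1.0}"   # Xen's hotplug system exports $vif
    case "$command" in
        online)  echo "ifconfig $vif up" ;;
        offline) echo "ifconfig $vif down" ;;
        *)       log error "unknown command: $command"; return 1 ;;
    esac
    log debug "Successful vif-core $command for $vif."
}

vif_core online
```

In the real script, Xen's hotplug system invokes it with `$command` and `$vif` already set in the environment; the `case` statement is the whole of the CORE-specific behavior, with bridge enslavement handled elsewhere.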