Merge branch 'rel/5.1'

bharnden 2018-05-22 20:44:26 -07:00
commit c3d0b01b7f
293 changed files with 6907 additions and 34130 deletions

View file

@@ -13,9 +13,9 @@ SUBDIRS = man figures
# extra cruft to remove
DISTCLEANFILES = Makefile.in stamp-vti
rst_files = conf.py constants.txt credits.rst ctrlnet.rst devguide.rst \
rst_files = conf.py.in constants.txt credits.rst ctrlnet.rst devguide.rst \
emane.rst index.rst install.rst intro.rst machine.rst \
ns3.rst performance.rst scripting.rst usage.rst
ns3.rst performance.rst scripting.rst usage.rst requirements.txt
EXTRA_DIST = $(rst_files)
@@ -26,7 +26,7 @@ EXTRA_DIST = $(rst_files)
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXOPTS = -q
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build

View file

@@ -26,7 +26,7 @@ import sphinx_rtd_theme
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.pngmath', 'sphinx.ext.ifconfig']
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.imgmath', 'sphinx.ext.ifconfig']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -42,16 +42,16 @@ master_doc = 'index'
# General information about the project.
project = u'CORE'
copyright = u'2017, core-dev'
copyright = u'2005-2018, core-dev'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '@CORE_VERSION@'
version = '@PACKAGE_VERSION@'
# The full version, including alpha/beta/rc tags.
release = '@CORE_VERSION@'
release = '@PACKAGE_VERSION@'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -223,7 +223,7 @@ man_pages = [
epub_title = u'CORE'
epub_author = u'core-dev'
epub_publisher = u'core-dev'
epub_copyright = u'2017, core-dev'
epub_copyright = u'2005-2018, core-dev'
# The language of the text. It defaults to the language option
# or en if the language is not set.

View file

@@ -4,15 +4,13 @@
.. |CENTOSVERSION| replace:: 6.x or 7.x
.. |BSDVERSION| replace:: 9.0
.. |CORERPM| replace:: 1.fc20.x86_64.rpm
.. |CORERPM2| replace:: 1.fc20.noarch.rpm
.. |COREDEB| replace:: 0ubuntu1_precise_amd64.deb
.. |COREDEB2| replace:: 0ubuntu1_precise_all.deb
.. |QVER| replace:: quagga-0.99.21mr2.2
.. |QVERDEB| replace:: quagga-mr_0.99.21mr2.2_amd64.deb
.. |QVERRPM| replace:: quagga-0.99.21mr2.2-1.fc16.x86_64.rpm
.. |APTDEPS| replace:: bash bridge-utils ebtables iproute libev-dev python
@@ -20,6 +18,6 @@
.. |APTDEPS3| replace:: autoconf automake gcc libev-dev make python-dev libreadline-dev pkg-config imagemagick help2man
.. |YUMDEPS| replace:: bash bridge-utils ebtables iproute libev python procps-ng net-tools
.. |YUMDEPS2| replace:: tcl tk tkimg
.. |YUMDEPS3| replace:: autoconf automake make libev-devel python-devel ImageMagick help2man

View file

@@ -33,14 +33,10 @@ These are being actively developed as of CORE |version|:
* *gui* - Tcl/Tk GUI. This uses Tcl/Tk because of its roots with the IMUNES
project.
* *daemon* - Python modules are found in the :file:`daemon/core` directory, the
daemon under :file:`daemon/sbin/core-daemon`, and Python extension modules for
Linux Network Namespace support are in :file:`daemon/src`.
daemon under :file:`daemon/scripts/core-daemon`.
* *netns* - Python extension modules for Linux Network Namespace support are in :file:`netns`.
* *ns3* - Python ns3 script support for running CORE.
* *doc* - Documentation for the manual lives here in reStructuredText format.
* *packaging* - Control files and script for building CORE packages are here.
These directories are not so actively developed:
* *kernel* - patches and modules mostly related to FreeBSD.
.. _The_CORE_API:
@@ -58,8 +54,7 @@ The GUI communicates with the CORE daemon using the API. One emulation server
communicates with another using the API. The API also allows other systems to
interact with the CORE emulation. The API allows another system to add, remove,
or modify nodes and links, and enables executing commands on the emulated
systems. On FreeBSD, the API is used for enhancing the wireless LAN
calculations. Wireless link parameters are updated on-the-fly based on node
systems. Wireless link parameters are updated on-the-fly based on node
positions.
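For example, the ``coresendmsg`` utility installed with the CORE daemon builds and sends a single API message from the command line. Below is a minimal sketch, assuming a running session on the default daemon port; the node message TLV names (``number``, ``xpos``, ``ypos``) are assumptions to verify against the ``coresendmsg -h`` listing for your version:

::

# list the supported API message types and TLVs
coresendmsg -h
# move node number 3 to a new canvas position in the running emulation
coresendmsg node number=3 xpos=200 ypos=100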
CORE listens on a local TCP port for API messages. The other system could be
@@ -87,7 +82,7 @@ The *vnoded* daemon is the program used to create a new namespace, and
listen on a control channel for commands that may instantiate other processes.
This daemon runs as PID 1 in the container. It is launched automatically by
the CORE daemon. The control channel is a UNIX domain socket usually named
:file:`/tmp/pycore.23098/n3`, for node 3 running on CORE
session 23098, for example. Root privileges are required for creating a new
namespace.
@@ -106,13 +101,13 @@ using a command such as:
::
gnome-terminal -e vcmd -c /tmp/pycore.50160/n1 -- bash
Similarly, the IPv4 routes Observer Widget will display the routing table using a command such as:
::
vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro
.. index:: core-cleanup
@@ -138,7 +133,7 @@ network namespace emulation.
tc qdisc show
# view the rules that make the wireless LAN work
ebtables -L
Below is a transcript of creating two emulated nodes and connecting them together with a wired link:
@@ -178,156 +173,8 @@ Below is a transcript of creating two emulated nodes and connecting them togethe
# display connectivity and ping from node 1 to node 2
brctl show
vcmd -c /tmp/n1.ctl -- ping 10.0.0.2
The above example script can be found as :file:`twonodes.sh` in the
:file:`examples/netns` directory. Use *core-cleanup* to clean up after the
script.
.. _FreeBSD_Commands:
FreeBSD Commands
================
.. index:: vimage
.. index:: ngctl
.. index:: Netgraph
.. _FreeBSD_Kernel_Commands:
FreeBSD Kernel Commands
-----------------------
The FreeBSD kernel emulation controlled by CORE is realized through several
userspace commands. The CORE GUI itself could be thought of as a glorified
script that dispatches these commands to build and manage the kernel emulation.
* **vimage** - the vimage command, short for "virtual image", is used to
create lightweight virtual machines and execute commands within the virtual
image context. On a FreeBSD CORE machine, see the *vimage(8)* man page for
complete details. The vimage command comes from the VirtNet project which
virtualizes the FreeBSD network stack.
* **ngctl** - the ngctl command, short for "netgraph control", creates
Netgraph nodes and hooks, connects them together, and allows for various
interactions with the Netgraph nodes. See the *ngctl(8)* man page for
complete details. The ngctl command is built-in to FreeBSD because the
Netgraph system is part of the kernel.
Both commands must be run as root.
Some example usage of the *vimage* command follows.
::
vimage # displays the current virtual image
vimage -l # lists running virtual images
vimage e0_n0 ps aux # list the processes running on node 0
for i in 1 2 3 4 5
do # execute a command on all nodes
vimage e0_n$i sysctl -w net.inet.ip.redirect=0
done
The *ngctl* command is more complex, due to the variety of Netgraph nodes
available, each with its own options.
::
ngctl l # list active Netgraph nodes
ngctl show e0_n8: # display node hook information
ngctl msg e0_n0-n1: getstats # get pkt count statistics from a pipe node
ngctl shutdown \\[0x0da3\\]: # shut down unnamed node using hex node ID
There are many other combinations of commands not shown here. See the online
manual (man) pages for complete details.
Below is a transcript of creating two emulated nodes, `router0` and `router1`,
and connecting them together with a link:
.. index:: create nodes from command-line
.. index:: command-line
::
# create node 0
vimage -c e0_n0
vimage e0_n0 hostname router0
ngctl mkpeer eiface ether ether
vimage -i e0_n0 ngeth0 eth0
vimage e0_n0 ifconfig eth0 link 40:00:aa:aa:00:00
vimage e0_n0 ifconfig lo0 inet localhost
vimage e0_n0 sysctl net.inet.ip.forwarding=1
vimage e0_n0 sysctl net.inet6.ip6.forwarding=1
vimage e0_n0 ifconfig eth0 mtu 1500
# create node 1
vimage -c e0_n1
vimage e0_n1 hostname router1
ngctl mkpeer eiface ether ether
vimage -i e0_n1 ngeth1 eth0
vimage e0_n1 ifconfig eth0 link 40:00:aa:aa:0:1
vimage e0_n1 ifconfig lo0 inet localhost
vimage e0_n1 sysctl net.inet.ip.forwarding=1
vimage e0_n1 sysctl net.inet6.ip6.forwarding=1
vimage e0_n1 ifconfig eth0 mtu 1500
# create a link between n0 and n1
ngctl mkpeer eth0@e0_n0: pipe ether upper
ngctl name eth0@e0_n0:ether e0_n0-n1
ngctl connect e0_n0-n1: eth0@e0_n1: lower ether
ngctl msg e0_n0-n1: setcfg \\
{{ bandwidth=100000000 delay=0 upstream={ BER=0 duplicate=0 } downstream={ BER=0 duplicate=0 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ queuelen=50 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ queuelen=50 } }}
Other FreeBSD commands that may be of interest:
.. index:: FreeBSD commands
* **kldstat**, **kldload**, **kldunload** - list, load, and unload
FreeBSD kernel modules
* **sysctl** - display and modify various pieces of kernel state
* **pkg_info**, **pkg_add**, **pkg_delete** - list, add, or remove
FreeBSD software packages.
* **vtysh** - start a Quagga CLI for router configuration
Netgraph Nodes
--------------
.. index:: Netgraph
.. index:: Netgraph nodes
Each Netgraph node implements a protocol or processes data in some well-defined
manner (see the `netgraph(4)` man page). The netgraph source code is located
in `/usr/src/sys/netgraph`. There you might discover additional nodes that
implement some desired functionality but have not yet been included in CORE.
Using certain kernel commands, you can likely include these types of nodes in
your CORE emulation.
The following Netgraph nodes are used by CORE:
* **ng_bridge** - switch node performs Ethernet bridging
* **ng_cisco** - Cisco HDLC serial links
* **ng_eiface** - virtual Ethernet interface that is assigned to each virtual machine
* **ng_ether** - physical Ethernet devices, used by the RJ45 tool
* **ng_hub** - hub node
* **ng_pipe** - used for wired Ethernet links, imposes packet delay, bandwidth restrictions, and other link characteristics
* **ng_socket** - socket used by *ngctl* utility
* **ng_wlan** - wireless LAN node

View file

@@ -9,14 +9,14 @@
Installation
************
This chapter describes how to set up a CORE machine. Note that the easiest
way to install CORE is from a binary package (deb or rpm) on Ubuntu or Fedora,
using the distribution's package manager to automatically install
dependencies; see :ref:`Installing_from_Packages`.
Ubuntu and Fedora Linux are the recommended distributions for running CORE. Ubuntu |UBUNTUVERSION| and Fedora |FEDORAVERSION| ship with kernels with support for namespaces built-in. They support the latest hardware. However,
these distributions are not strictly required. CORE will likely work on other
flavors of Linux, see :ref:`Installing_from_Source`.
The primary dependencies are Tcl/Tk (8.5 or newer) for the GUI, and Python 2.6 or 2.7 for the CORE daemon.
@@ -25,30 +25,23 @@ The primary dependencies are Tcl/Tk (8.5 or newer) for the GUI, and Python 2.6 o
.. index:: paths
.. index:: install paths
CORE files are installed to the following directories. When installing from
source, the :file:`/usr/local` prefix is used in place of :file:`/usr` by
default.
============================================= =================================
Install Path Description
============================================= =================================
:file:`/usr/bin/core-gui` GUI startup command
:file:`/usr/sbin/core-daemon` Daemon startup command
:file:`/usr/sbin/` Misc. helper commands/scripts
:file:`/usr/lib/core` GUI files
:file:`/usr/lib/python2.7/dist-packages/core` Python modules for daemon/scripts
:file:`/etc/core/` Daemon configuration files
:file:`~/.core/` User-specific GUI preferences and scenario files
:file:`/usr/share/core/` Example scripts and scenarios
:file:`/usr/share/man/man1/` Command man pages
:file:`/etc/init.d/core-daemon` System startup script for daemon
============================================= =================================
Under Fedora, :file:`/site-packages/` is used instead of :file:`/dist-packages/`
for the Python modules, and :file:`/etc/systemd/system/core-daemon.service`
instead of :file:`/etc/init.d/core-daemon` for the system startup script.
CORE files are installed to the following directories.
======================================================= =================================
Install Path Description
======================================================= =================================
:file:`/usr/local/bin/core-gui` GUI startup command
:file:`/usr/local/bin/core-daemon` Daemon startup command
:file:`/usr/local/bin/` Misc. helper commands/scripts
:file:`/usr/local/lib/core` GUI files
:file:`/usr/local/lib/python2.7/dist-packages/core` Python modules for daemon/scripts
:file:`/etc/core/` Daemon configuration files
:file:`~/.core/` User-specific GUI preferences and scenario files
:file:`/usr/local/share/core/` Example scripts and scenarios
:file:`/usr/local/share/man/man1/` Command man pages
:file:`/etc/init.d/core-daemon` SysV startup script for daemon
:file:`/etc/systemd/system/core-daemon.service` Systemd startup script for daemon
======================================================= =================================
.. _Prerequisites:
@@ -57,7 +50,7 @@ Prerequisites
.. index:: Prerequisites
The Linux or FreeBSD operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon require Python. Details of the individual software packages required can be found in the installation steps.
A Linux operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon requires Python. Details of the individual software packages required can be found in the installation steps.
.. _Required_Hardware:
@@ -68,7 +61,7 @@ Required Hardware
.. index:: System requirements
Any computer capable of running Linux or FreeBSD should be able to run CORE. Since the physical machine will be hosting numerous virtual machines, as a general rule you should select a machine having as much RAM and CPU resources as possible.
Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous virtual machines, as a general rule you should select a machine having as much RAM and CPU resources as possible.
A *general recommendation* would be:
@@ -77,9 +70,9 @@ A *general recommendation* would be:
* about 3 MB of free disk space (plus more for dependency packages such as Tcl/Tk)
* X11 for the GUI, or remote X11 over SSH
The computer can be a laptop, desktop, or rack-mount server. A keyboard, mouse,
and monitor are not required if a network connection is available
for remotely accessing the machine. A 3D accelerated graphics card
is not required.
.. _Required_Software:
@@ -87,18 +80,13 @@ is not required.
Required Software
-----------------
CORE requires the Linux or FreeBSD operating systems because it uses virtualization provided by the kernel. It does not run on the Windows or Mac OS X operating systems (unless it is running within a virtual machine guest.) There are two
different virtualization technologies that CORE can currently use:
Linux network namespaces and FreeBSD jails,
CORE requires a Linux operating system because it uses virtualization provided by the kernel. It does not run on the Windows or Mac OS X operating systems (unless it is running within a virtual machine guest).
The virtualization technology that CORE currently uses is
Linux network namespaces;
see :ref:`How_Does_it_Work?` for virtualization details.
**Linux network namespaces is the recommended platform.** Development is focused here and it supports the latest features. It is the easiest to install because there is no need to patch, install, and run a special Linux kernel.
FreeBSD |BSDVERSION|-RELEASE may offer the best scalability. If your
applications run under FreeBSD and you are comfortable with that platform,
this may be a good choice. Device and application support by BSD
may not be as extensive as Linux.
The CORE GUI requires the X.Org X Window system (X11), or can run over a
remote X11 session. For specific Tcl/Tk, Python, and other libraries required
to run CORE, refer to the :ref:`Installation` section.
@@ -120,8 +108,9 @@ Installing from Packages
The easiest way to install CORE is using the pre-built packages. The package
managers on Ubuntu or Fedora will
automatically install dependencies for you.
You can obtain the CORE packages from the `CORE downloads <http://downloads.pf.itd.nrl.navy.mil/core/packages/>`_ page.
automatically install dependencies for you.
You can obtain the CORE packages from the `CORE downloads <http://downloads.pf.itd.nrl.navy.mil/core/packages/>`_ page
or `CORE GitHub <https://github.com/coreemu/core/releases>`_.
.. _Installing_from_Packages_on_Ubuntu:
@@ -130,41 +119,11 @@ Installing from Packages on Ubuntu
First install the Ubuntu |UBUNTUVERSION| operating system.
.. tip::
With Debian or Ubuntu 14.04 (trusty) and newer, you can simply install
CORE using the following command::
sudo apt-get install core-network
Proceed to the "Install Quagga for routing." line below to install Quagga.
The other commands shown in this section apply to binary packages
downloaded from the CORE website instead of using the Debian/Ubuntu
repositories.
.. NOTE::
Linux package managers (e.g. `software-center`, `yum`) will take care
of installing the dependencies for you when you use the CORE packages.
You do not need to manually use these installation lines. You do need
to select which Quagga package to use.
* **Optional:** install the prerequisite packages (otherwise skip this
step and have the package manager install them for you.)
.. parsed-literal::
# make sure the system is up to date; you can also use synaptic or
# update-manager instead of apt-get update/dist-upgrade
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install |APTDEPS| |APTDEPS2|
* Install Quagga for routing. If you plan on working with wireless
networks, we recommend
installing
`OSPF MDR <http://www.nrl.navy.mil/itd/ncs/products/ospf-manet>`__
(replace `amd64` below with `i386` if needed
to match your architecture):
.. parsed-literal::
@@ -178,7 +137,7 @@ First install the Ubuntu |UBUNTUVERSION| operating system.
::
sudo apt-get install quagga
* Install the CORE deb packages for Ubuntu, using a GUI that automatically
resolves dependencies (note that the absolute path to the deb file
must be used with ``software-center``):
@@ -187,24 +146,24 @@ First install the Ubuntu |UBUNTUVERSION| operating system.
software-center /home/user/Downloads/core-daemon\_\ |version|-|COREDEB|
software-center /home/user/Downloads/core-gui\_\ |version|-|COREDEB2|
or install from command-line:
.. parsed-literal::
sudo dpkg -i core-daemon\_\ |version|-|COREDEB|
sudo dpkg -i core-gui\_\ |version|-|COREDEB2|
* Start the CORE daemon as root.
::
sudo /etc/init.d/core-daemon start
* Run the CORE GUI as a normal user:
::
core-gui
After running the ``core-gui`` command, a GUI should appear with a canvas
for drawing topologies. Messages will print out on the console about
@@ -223,7 +182,7 @@ examples below, replace with `i686` if using a 32-bit architecture. Also,
Fedora release number.
* **CentOS only:** in order to install the `libev` and `tkimg` prerequisite
packages, you
first need to install the `EPEL <http://fedoraproject.org/wiki/EPEL>`_ repo
(Extra Packages for Enterprise Linux):
@@ -235,7 +194,7 @@ Fedora release number.
* **CentOS 7.x only:** as of this writing, the `tkimg` prerequisite package
is missing from EPEL 7.x, but the EPEL 6.x package can be manually installed
from
`here <http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/tkimg.html>`_
::
@@ -255,7 +214,7 @@ Fedora release number.
yum install |YUMDEPS| |YUMDEPS2|
* **Optional (Fedora 17+):** Fedora 17 and newer have an additional
prerequisite providing the required netem kernel modules (otherwise
skip this step and have the package manager install it for you.)
@@ -278,23 +237,23 @@ Fedora release number.
::
yum install quagga
* Install the CORE RPM packages for Fedora and automatically resolve
dependencies:
.. parsed-literal::
yum localinstall core-daemon-|version|-|CORERPM| --nogpgcheck
yum localinstall python-core_|service|-|version|-|CORERPM| --nogpgcheck
yum localinstall core-gui-|version|-|CORERPM2| --nogpgcheck
or install from the command-line:
.. parsed-literal::
rpm -ivh core-daemon-|version|-|CORERPM|
rpm -ivh python-core_|service|-|version|-|CORERPM|
rpm -ivh core-gui-|version|-|CORERPM2|
* Turn off SELINUX by setting ``SELINUX=disabled`` in the :file:`/etc/sysconfig/selinux` file, and adding ``selinux=0`` to the kernel line in
your :file:`/etc/grub.conf` file; on Fedora 15 and newer, disable sandboxd using ``chkconfig sandbox off``;
@@ -310,12 +269,12 @@ Fedora release number.
systemctl start core-daemon.service
# or for CentOS:
/etc/init.d/core-daemon start
* Run the CORE GUI as a normal user:
::
core-gui
After running the ``core-gui`` command, a GUI should appear with a canvas
for drawing topologies. Messages will print out on the console about
@@ -341,11 +300,11 @@ These packages are not required for normal binary package installs.
sudo apt-get install |APTDEPS| \\
|APTDEPS2| \\
|APTDEPS3|
You can obtain the CORE source from the `CORE source <http://downloads.pf.itd.nrl.navy.mil/core/source/>`_ page. Choose either a stable release version or
the development snapshot available in the `nightly_snapshots` directory.
The ``-j8`` argument to ``make`` will run eight simultaneous jobs, to speed up
builds on multi-core systems.
.. parsed-literal::
@@ -356,9 +315,9 @@ builds on multi-core systems.
./configure
make -j8
sudo make install
The CORE Manual documentation is built separately from the :file:`doc/`
sub-directory in the source. It requires Sphinx:
.. parsed-literal::
@@ -382,16 +341,16 @@ These packages are not required for normal binary package installs.
yum install |YUMDEPS| \\
|YUMDEPS2| \\
|YUMDEPS3|
.. NOTE::
For a minimal X11 installation, also try these packages::
yum install xauth xterm urw-fonts
You can obtain the CORE source from the `CORE source <http://downloads.pf.itd.nrl.navy.mil/core/source/>`_ page. Choose either a stable release version or
the development snapshot available in the :file:`nightly_snapshots` directory.
The ``-j8`` argument to ``make`` will run eight simultaneous jobs, to speed up
builds on multi-core systems. Notice the ``configure`` flag to tell the build
system that a systemd service file should be installed under Fedora.
@@ -403,18 +362,12 @@ system that a systemd service file should be installed under Fedora.
./configure --with-startup=systemd
make -j8
sudo make install
Note that the Linux RPM and Debian packages do not use the ``/usr/local``
prefix, and files are instead installed to ``/usr/sbin``, and
``/usr/lib``. This difference is a result of aligning with the directory
structure of Linux packaging systems and FreeBSD ports packaging.
Another note is that the Python distutils in Fedora Linux will install the CORE
Python modules to :file:`/usr/lib/python2.7/site-packages/core`, instead of
using the :file:`dist-packages` directory.
The CORE Manual documentation is built separately from the :file:`doc/`
sub-directory in the source. It requires Sphinx:
.. parsed-literal::
@@ -444,7 +397,7 @@ CentOS/EL6 does not use the systemd service file, so the `configure` option
`--with-startup=systemd` should be omitted:
::
./configure
@@ -454,168 +407,18 @@ CentOS/EL6 does not use the systemd service file, so the `configure` option
Installing from Source on SUSE
------------------------------
To build CORE from source on SUSE or OpenSUSE,
use the similar instructions shown in :ref:`Installing_from_Source_on_Fedora`,
except that the following `configure` option should be used:
::
./configure --with-startup=suse
This causes a separate init script to be installed that is tailored towards SUSE systems.
The `zypper` command is used instead of `yum`.
For OpenSUSE/Xen based installations, refer to the `README-Xen` file included
in the CORE source.
.. _Installing_from_Source_on_FreeBSD:
Installing from Source on FreeBSD
---------------------------------
.. index:: kernel patch
**Rebuilding the FreeBSD Kernel**
The FreeBSD kernel requires a small patch to allow per-node directories in the
filesystem. Also, the `VIMAGE` build option needs to be turned on to enable
jail-based network stack virtualization. The source code for the FreeBSD
kernel is located in :file:`/usr/src/sys`.
Instructions below will use the :file:`/usr/src/sys/amd64` architecture
directory, but the directory :file:`/usr/src/sys/i386` should be substituted
if you are using a 32-bit architecture.
The kernel patch is available from the CORE source tarball under core-|version|/kernel/symlinks-8.1-RELEASE.diff. This patch applies to the
FreeBSD 8.x or 9.x kernels.
.. parsed-literal::
cd /usr/src/sys
# first you can check if the patch applies cleanly using the '-C' option
patch -p1 -C < ~/core-|version|/kernel/symlinks-8.1-RELEASE.diff
# without '-C' applies the patch
patch -p1 < ~/core-|version|/kernel/symlinks-8.1-RELEASE.diff
A kernel configuration file named :file:`CORE` can be found within the source tarball: core-|version|/kernel/freebsd8-config-CORE. The config is valid for
FreeBSD 8.x or 9.x kernels.
The contents of this configuration file are shown below; you can edit it to suit your needs.
::
# this is the FreeBSD 9.x kernel configuration file for CORE
include GENERIC
ident CORE
options VIMAGE
nooptions SCTP
options IPSEC
device crypto
options IPFIREWALL
options IPFIREWALL_DEFAULT_TO_ACCEPT
The kernel configuration file can be linked or copied to the kernel source directory. Use it to configure and build the kernel:
.. parsed-literal::
cd /usr/src/sys/amd64/conf
cp ~/core-|version|/kernel/freebsd8-config-CORE CORE
config CORE
cd ../compile/CORE
make cleandepend && make depend
make -j8 && make install
Change the number 8 above to match the number of CPU cores you have times two.
Note that the ``make install`` step will move your existing kernel to
``/boot/kernel.old`` and remove that directory if it already exists. Reboot to
enable this new patched kernel.
**Building CORE from Source on FreeBSD**
Here are the prerequisite packages from the FreeBSD ports system:
::
pkg_add -r tk85
pkg_add -r libimg
pkg_add -r bash
pkg_add -r libev
pkg_add -r sudo
pkg_add -r python
pkg_add -r autotools
pkg_add -r gmake
Note that if you are installing to a bare FreeBSD system and want to SSH with X11 forwarding to that system, these packages will help:
::
pkg_add -r xauth
pkg_add -r xorg-fonts
The ``sudo`` package needs to be configured so a normal user can run the CORE
GUI using the command ``core-gui`` (opening a shell window on a node uses a
command such as ``sudo vimage n1``.)
On FreeBSD, the CORE source is built using autotools and gmake:
.. parsed-literal::
tar xzf core-|version|.tar.gz
cd core-|version|
./bootstrap.sh
./configure
gmake -j8
sudo gmake install
Build and install the ``vimage`` utility for controlling virtual images. The source can be obtained from `FreeBSD SVN <http://svn.freebsd.org/viewvc/base/head/tools/tools/vimage/>`_, or it is included with the CORE source for convenience:
.. parsed-literal::
cd core-|version|/kernel/vimage
make
make install
.. index:: FreeBSD; kernel modules
.. index:: kernel modules
.. index:: ng_wlan and ng_pipe
On FreeBSD you should also install the CORE kernel modules for wireless emulation. Perform this step after you have recompiled and installed the FreeBSD kernel.
.. parsed-literal::
cd core-|version|/kernel/ng_pipe
make
sudo make install
cd ../ng_wlan
make
sudo make install
The :file:`ng_wlan` kernel module allows for the creation of WLAN nodes. This
is a modified :file:`ng_hub` Netgraph module. Instead of packets being copied
to every connected node, the WLAN maintains a hash table of connected node
pairs. Furthermore, link parameters can be specified for node pairs, in
addition to the on/off connectivity. The parameters are tagged to each packet
and sent to the connected :file:`ng_pipe` module. The :file:`ng_pipe` has been
modified to read any tagged parameters and apply them instead of its default
link effects.
The :file:`ng_wlan` also supports linking together multiple WLANs across different machines using the :file:`ng_ksocket` Netgraph node, for distributed emulation.
The Quagga routing suite is recommended for routing,
:ref:`Quagga_Routing_Software` for installation.
@@ -630,12 +433,12 @@ Virtual networks generally require some form of routing in order to work (e.g.
to automatically populate routing tables for routing packets from one subnet
to another.) CORE builds OSPF routing protocol
configurations by default when the blue router
node type is used. The OSPF protocol is available
from the `Quagga open source routing suite <http://www.quagga.net>`_.
Other routing protocols are available using different
node services, :ref:`Default_Services_and_Node_Types`.
Quagga is not specified as a dependency for the CORE packages because
there are two different Quagga packages that you may use:
* `Quagga <http://www.quagga.net>`_ - the standard version of Quagga, suitable for static wired networks, and usually available via your distribution's package manager.
@@ -645,7 +448,7 @@ there are two different Quagga packages that you may use:
.. index:: MANET Designated Routers (MDR)
*
`OSPF MANET Designated Routers <http://www.nrl.navy.mil/itd/ncs/products/ospf-manet>`_ (MDR) - the Quagga routing suite with a modified version of OSPFv3,
optimized for use with mobile wireless networks. The *mdr* node type (and the MDR service) requires this variant of Quagga.
@@ -657,26 +460,19 @@ otherwise install the standard version of Quagga using your package manager or f
Installing Quagga from Packages
-------------------------------
To install the standard version of Quagga from packages, use your package
manager (Linux) or the ports system (FreeBSD).
To install the standard version of Quagga from packages, use your package manager (Linux).
Ubuntu users:
::
sudo apt-get install quagga
Fedora users:
::
yum install quagga
FreeBSD users:
::
pkg_add -r quagga
To install the Quagga variant having OSPFv3 MDR, first download the
appropriate package, and install using the package manager.
Ubuntu users:
@@ -715,7 +511,7 @@ To compile Quagga to work with CORE on Linux:
--localstatedir=/var/run/quagga
make
sudo make install
Note that the configuration directory :file:`/usr/local/etc/quagga` shown for
Quagga above could be :file:`/etc/quagga`, if you create a symbolic link from
@@ -729,26 +525,9 @@ If you try to run quagga after installing from source and get an error such as:
error while loading shared libraries libzebra.so.0
this is usually a sign that you have to run `sudo ldconfig` to refresh the
cache file.
To compile Quagga to work with CORE on FreeBSD:
.. parsed-literal::
tar xzf |QVER|.tar.gz
cd |QVER|
./configure --enable-user=root --enable-group=wheel \\
--sysconfdir=/usr/local/etc/quagga --enable-vtysh \\
--localstatedir=/var/run/quagga
gmake
gmake install
On FreeBSD |BSDVERSION| you can use ``make`` or ``gmake``.
You probably want to compile Quagga from the ports system in
:file:`/usr/ports/net/quagga`.
VCORE
=====

View file

@@ -12,8 +12,8 @@ networks. As an emulator, CORE builds a representation of a real computer
network that runs in real time, as opposed to simulation, where abstract models
are used. The live-running emulation can be connected to physical networks and
routers. It provides an environment for running real applications and
protocols, taking advantage of virtualization provided by the Linux or FreeBSD
operating systems.
protocols, taking advantage of virtualization provided by the Linux operating
system.
Some of its key features are:
@@ -63,7 +63,6 @@ command-line tools.
The system is modular to allow mixing different components. The virtual
networks component, for example, can be realized with other network
simulators and emulators, such as ns-3 and EMANE.
Different types of kernel virtualization are supported.
Another example is how a session can be designed and started using
the GUI, and continue to run in "headless" operation with the GUI closed.
The CORE API is sockets based,
@@ -94,8 +93,7 @@ further control.
How Does it Work?
=================
A CORE node is a lightweight virtual machine. The CORE framework runs on Linux
and FreeBSD systems. The primary platform used for development is Linux.
A CORE node is a lightweight virtual machine. The CORE framework runs on Linux.
.. index::
single: Linux; virtualization
@@ -104,8 +102,6 @@ and FreeBSD systems. The primary platform used for development is Linux.
single: network namespaces
* :ref:`Linux` CORE uses Linux network namespace virtualization to build virtual nodes, and ties them together with virtual networks using Linux Ethernet bridging.
* :ref:`FreeBSD` CORE uses jails with a network stack virtualization kernel option to build virtual nodes, and ties them together with virtual networks using BSD's Netgraph system.
.. _Linux:
@@ -115,11 +111,10 @@ Linux network namespaces (also known as netns, LXC, or `Linux containers
<http://lxc.sourceforge.net/>`_) is the primary virtualization
technique used by CORE. LXC has been part of the mainline Linux kernel since
2.6.24. Recent Linux distributions such as Fedora and Ubuntu have
namespaces-enabled kernels out of the box, so the kernel does not need to be
patched or recompiled.
A namespace is created using the ``clone()`` system call. Similar
to the BSD jails, each namespace has its own process environment and private
network stack. Network namespaces share the same filesystem in CORE.
namespaces-enabled kernels out of the box.
A namespace is created using the ``clone()`` system call. Each namespace has
its own process environment and private network stack. Network namespaces
share the same filesystem in CORE.
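This isolation can be sketched without CORE using the ``unshare(1)`` utility from util-linux, which exercises the same kernel facility that CORE reaches through ``clone()``. This is an illustration only; CORE itself launches nodes with ``vnoded``:

::

# start a shell inside a new network namespace (requires root)
sudo unshare --net bash
# inside, only an isolated loopback device is visible
ip link show
# the filesystem is still shared with the host, as with CORE nodes
ls /tmp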
.. index::
single: Linux; bridging
@@ -132,56 +127,6 @@ disciplines. Ebtables is Ethernet frame filtering on Linux bridges. Wireless
networks are emulated by controlling which interfaces can send and receive with
ebtables rules.
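As a rough sketch of that mechanism, using hypothetical interface names (CORE generates its own bridge and interface names), a bridge joins two node interfaces and an ebtables rule then cuts the link between them, as if the wireless nodes moved out of range:

::

# bridge two (hypothetical) node interfaces together
sudo brctl addbr wlan1
sudo brctl addif wlan1 veth1
sudo brctl addif wlan1 veth2
# drop frames from veth1 to veth2 to emulate lost connectivity
sudo ebtables -A FORWARD -i veth1 -o veth2 -j DROP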
.. _FreeBSD:
FreeBSD
-------
.. index::
single: FreeBSD; Network stack virtualization
single: FreeBSD; jails
single: FreeBSD; vimages
FreeBSD jails provide an isolated process space, a virtual environment for
running programs. Starting with FreeBSD 8.0, a new `vimage` kernel option
extends BSD jails so that each jail can have its own virtual network stack --
its own networking variables such as addresses, interfaces, routes, counters,
protocol state, socket information, etc. The existing networking algorithms and
code paths are intact but operate on this virtualized state.
Each jail plus network stack forms a lightweight virtual machine. These are
named jails or *virtual images* (or *vimages*) and are created using the
``jail`` or ``vimage`` command. Unlike traditional virtual
machines, vimages do not feature entire operating systems running on emulated
hardware. All of the vimages will share the same processor, memory, clock, and
other system resources. Because the actual hardware is not emulated and network
packets can be passed by reference through the in-kernel Netgraph system,
vimages are quite lightweight and a single system can accommodate numerous
instances.
Virtual network stacks in FreeBSD were historically available as a patch to the
FreeBSD 4.11 and 7.0 kernels, and the VirtNet project [#f1]_ [#f2]_
added this functionality to the
mainline 8.0-RELEASE and newer kernels.
.. index::
single: FreeBSD; Netgraph
The FreeBSD Operating System kernel features a graph-based
networking subsystem named Netgraph. The netgraph(4) manual page quoted below
best defines this system:
The netgraph system provides a uniform and modular system for the
implementation of kernel objects which perform various networking functions.
The objects, known as nodes, can be arranged into arbitrarily complicated
graphs. Nodes have hooks which are used to connect two nodes together,
forming the edges in the graph. Nodes communicate along the edges to
process data, implement protocols, etc.
The aim of netgraph is to supplement rather than replace the existing
kernel networking infrastructure.
.. index::
single: IMUNES
single: VirtNet
@@ -201,7 +146,7 @@ The Tcl/Tk CORE GUI was originally derived from the open source
project from the University of Zagreb
as a custom project within Boeing Research and Technology's Network
Technology research group in 2004. Since then they have developed the CORE
framework to use not only FreeBSD but Linux virtualization, have developed a
framework to use Linux virtualization, have developed a
Python framework, and made numerous user- and kernel-space developments, such
as support for wireless networks, IPsec, the ability to distribute emulations,
simulation integration, and more. The IMUNES project also consists of userspace
@@ -226,20 +171,16 @@ CORE has been released by Boeing to the open source community under the BSD
license. If you find CORE useful for your work, please contribute back to the
project. Contributions can be as simple as reporting a bug, dropping a line of
encouragement or technical suggestions to the mailing lists, or can also
include submitting patches or maintaining aspects of the tool. For details on
contributing to CORE, please visit the
`wiki <http://code.google.com/p/coreemu/wiki/Home, wiki>`_.
include submitting patches or maintaining aspects of the tool. For contributing to
CORE, please visit the
`CORE GitHub <https://github.com/coreemu/core>`_.
Besides this manual, there are other additional resources available online:
* `CORE website <http://www.nrl.navy.mil/itd/ncs/products/core>`_ - main project page containing demos, downloads, and mailing list information.
* `CORE supplemental website <http://code.google.com/p/coreemu/>`_ - supplemental Google Code page with a quickstart guide, wiki, bug tracker, and screenshots.
.. index::
single: wiki
single: CORE; wiki
The `CORE wiki <http://code.google.com/p/coreemu/wiki/Home>`_ is a good place to check for the latest documentation and tips.
single: CORE
Goals
-----
@@ -255,10 +196,9 @@ Non-Goals
---------
This is a list of Non-Goals, specific things that people may be interested in but are not areas that we will pursue.
#. Reinventing the wheel - Where possible, CORE reuses existing open source components such as virtualization, Netgraph, netem, bridging, Quagga, etc.
#. 1,000,000 nodes - While the goal of CORE is to provide efficient, scalable network emulation, there is no set goal of N number of nodes. There are realistic limits on what a machine can handle as its resources are divided amongst virtual nodes. We will continue to make things more efficient and let the user determine the right number of nodes based on available hardware and the activities each node is performing.
#. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in the Linux or FreeBSD operating systems.
#. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in Linux operating systems.
#. Hardware-specific - CORE itself is not an instantiation of hardware, a testbed, or a specific laboratory setup; it should run on commodity laptop and desktop PCs, in addition to high-end server hardware.

View file

@@ -22,9 +22,9 @@ netns
The *netns* machine type is the default. This is for nodes that will be
backed by Linux network namespaces. See :ref:`Linux` for a brief explanation of
netns. This default machine type is very lightweight, providing a minimum
amount of
virtualization in order to emulate a network.
Another reason this is designated as the default machine type
is that this virtualization technology
typically requires no changes to the kernel; it is available out-of-the-box
@@ -54,7 +54,7 @@ isolated or virtualized environment, but directly on the operating system.
Physical nodes must be assigned to servers, the same way nodes
are assigned to emulation servers with :ref:`Distributed_Emulation`.
The list of available physical nodes currently shares the same dialog box
and list as the emulation servers, accessed using the *Emulation Servers...*
entry from the *Session* menu.
.. index:: GRE tunnels with physical nodes
@@ -65,27 +65,7 @@ is drawn to indicate network tunneling. A GRE tunneling interface will be
created on the physical node and used to tunnel traffic to and from the
emulated world.
Double-clicking on a physical node during runtime
opens a terminal with an SSH shell to that
node. Users should configure public-key SSH login as done with emulation
servers.
.. _xen:
xen
===
.. index:: xen machine type
The *xen* machine type is an experimental new type in CORE for managing
Xen domUs from within CORE. After further development,
it may be documented here.
Current limitations include only supporting ISO-based filesystems, and lack
of integration with node services, EMANE, and possibly other features of CORE.
There is a :file:`README-Xen` file available in the CORE source that contains
further instructions for setting up Xen-based nodes.

View file

@@ -10,23 +10,23 @@
if WANT_GUI
GUI_MANS = core-gui.1
endif
if WANT_DAEMON
DAEMON_MANS = vnoded.1 vcmd.1 netns.1 core-daemon.1 coresendmsg.1 \
core-cleanup.1 core-xen-cleanup.1 core-manage.1
core-cleanup.1 core-manage.1
endif
man_MANS = $(GUI_MANS) $(DAEMON_MANS)
.PHONY: generate-mans
generate-mans:
$(HELP2MAN) --source CORE 'sh $(top_srcdir)/gui/core-gui' -o core-gui.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/daemon/src/vnoded -o vnoded.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/daemon/src/vcmd -o vcmd.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/daemon/src/netns -o netns.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-daemon -o core-daemon.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/coresendmsg -o coresendmsg.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-cleanup -o core-cleanup.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-xen-cleanup -o core-xen-cleanup.1.new
$(HELP2MAN) --version-string=$(CORE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/sbin/core-manage -o core-manage.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/netns/vnoded -o vnoded.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/netns/vcmd -o vcmd.1.new
$(HELP2MAN) --no-info --source CORE $(top_srcdir)/netns/netns -o netns.1.new
$(HELP2MAN) --version-string=$(PACKAGE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/scripts/core-daemon -o core-daemon.1.new
$(HELP2MAN) --version-string=$(PACKAGE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/scripts/coresendmsg -o coresendmsg.1.new
$(HELP2MAN) --version-string=$(PACKAGE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/scripts/core-cleanup -o core-cleanup.1.new
$(HELP2MAN) --version-string=$(PACKAGE_VERSION) --no-info --source CORE $(top_srcdir)/daemon/scripts/core-manage -o core-manage.1.new
.PHONY: diff
diff:
@@ -34,5 +34,9 @@ diff:
colordiff -u $$m $$m.new | less -R; \
done;
clean-local:
-rm -f $(addsuffix .new,$(GUI_MANS))
-rm -f $(addsuffix .new,$(DAEMON_MANS))
DISTCLEANFILES = Makefile.in
EXTRA_DIST = $(man_MANS)

View file

@@ -20,7 +20,6 @@ remove the core-daemon.log file
.BR core-gui(1),
.BR core-daemon(1),
.BR coresendmsg(1),
.BR core-xen-cleanup(1),
.BR vcmd(1),
.BR vnoded(1)
.SH BUGS

View file

@@ -47,7 +47,7 @@ ns-3 Scripting
Currently, ns-3 is supported by writing
:ref:`Python scripts <Python_Scripting>`, but not through
drag-and-drop actions within the GUI.
If you have a copy of the CORE source, look under :file:`core/daemon/ns3/examples/` for example scripts; a CORE installation package puts these under
If you have a copy of the CORE source, look under :file:`ns3/examples/` for example scripts; a CORE installation package puts these under
:file:`/usr/share/core/examples/corens3`.
To run these scripts, install CORE so the CORE Python libraries are accessible,
@@ -168,8 +168,8 @@ a constant-rate 802.11a-based ad hoc network, using a lot of ns-3 defaults.
However, programs may be written with a blend of ns-3 API and CORE Python
API calls. This section examines some of the fundamental objects in
the CORE ns-3 support. Source code can be found in
:file:`daemon/ns3/corens3/obj.py` and example
code in :file:`daemon/ns3/corens3/examples/`.
:file:`ns3/corens3/obj.py` and example
code in :file:`ns3/corens3/examples/`.
Ns3Session
----------

View file

@@ -19,7 +19,7 @@ The top question about the performance of CORE is often
* Hardware - the number and speed of processors in the computer, the available
processor cache, RAM memory, and front-side bus speed may greatly affect
overall performance.
* Operating system version - Linux or FreeBSD, and the specific kernel versions
* Operating system version - the Linux distribution and the specific kernel versions
used will affect overall performance.
* Active processes - all nodes share the same CPU resources, so if one or more
nodes is performing a CPU-intensive task, overall performance will suffer.
@@ -28,8 +28,8 @@ The top question about the performance of CORE is often
* GUI usage - widgets that run periodically, mobility scenarios, and other GUI
interactions generally consume CPU cycles that may be needed for emulation.
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running FreeBSD
|BSDVERSION|, we have found it reasonable to run 30-75 nodes running
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
we have found it reasonable to run 30-75 nodes running
OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more
nodes, but at that point it becomes critical as to what each of the nodes is
doing.
@@ -38,7 +38,7 @@ doing.
Because this software is primarily a network emulator, the more appropriate
question is *how much network traffic can it handle?* On the same 3.0GHz server
described above, running FreeBSD 4.11, about 300,000 packets-per-second can be
described above, running Linux, about 300,000 packets-per-second can be
pushed through the system. The number of hops and the size of the packets is
less important. The limiting factor is the number of times that the operating
system needs to handle a packet. The 300,000 pps figure represents the number
@@ -52,9 +52,9 @@ throughput seen on the full length of the network path.
For a more detailed study of performance in CORE, refer to the following publications:
* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time network emulator, Proceedings of IEEE MILCOM Conference, 2008.

View file

@@ -47,7 +47,7 @@ Here are the basic elements of a CORE Python script:
node1.newnetif(hub1, ["10.0.0.1/24"])
node2.newnetif(hub1, ["10.0.0.2/24"])
node1.icmd(["ping", "-c", "5", "10.0.0.2"])
node1.vnodeclient.icmd(["ping", "-c", "5", "10.0.0.2"])
session.shutdown()
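The opening lines of this block fall outside the diff context above. A speculative completion into a self-contained script follows; the session and node creation calls (``pycore.Session``, ``addobj``) are assumptions modeled on the example scripts shipped under :file:`daemon/examples/netns` and should be checked against your installed version:

::

# assumed imports and session setup; verify against daemon/examples/netns
from core import pycore
session = pycore.Session(persistent=True)
# create two nodes and a hub joining them
node1 = session.addobj(cls=pycore.nodes.CoreNode, name="n1")
node2 = session.addobj(cls=pycore.nodes.CoreNode, name="n2")
hub1 = session.addobj(cls=pycore.nodes.HubNode, name="hub1")
node1.newnetif(hub1, ["10.0.0.1/24"])
node2.newnetif(hub1, ["10.0.0.2/24"])
# run a command on node 1 and tear the session down
node1.vnodeclient.icmd(["ping", "-c", "5", "10.0.0.2"])
session.shutdown()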
@@ -65,18 +65,6 @@ interactive Python shell, you can retrieve online help about the various
classes and methods; for example *help(nodes.CoreNode)* or
*help(Session)*.
An interactive development environment (IDE) is available for browsing
the CORE source, the
`Eric Python IDE <http://eric-ide.python-projects.org/index.html>`_.
CORE has a project file that can be opened by Eric, in the source under
:file:`core/daemon/CORE.e4p`.
This IDE
has a class browser for viewing a tree of classes and methods. It features
syntax highlighting, auto-completion, indenting, and more. One feature that
is helpful with learning the CORE Python modules is the ability to generate
class diagrams; right-click on a class, choose *Diagrams*, and
*Class Diagram*.
.. index:: daemon versus script
.. index:: script versus daemon
.. index:: script with GUI support

View file

@@ -11,9 +11,9 @@ Using the CORE GUI
.. index:: how to use CORE
CORE can be used via the GUI or :ref:`Python_Scripting`.
A typical emulation workflow is outlined in :ref:`emulation-workflow`.
Often the GUI is used to draw nodes and network devices on the canvas.
A Python script could also be written, that imports the CORE Python module, to configure and instantiate nodes and networks. This chapter primarily covers usage of the CORE GUI.
.. _emulation-workflow:
@@ -24,7 +24,7 @@
Emulation Workflow
CORE can be customized to perform any action at each phase depicted in :ref:`emulation-workflow`. See the *Hooks...* entry on the
:ref:`Session_Menu`
for details about when these session states are reached.
@@ -43,13 +43,13 @@ mode. Nodes are drawn on a blank canvas using the toolbar on the left and
configured from right-click menus or by double-clicking them. The GUI does not
need to be run as root.
Once editing is complete, pressing the green `Start` button (or choosing `Execute` from the `Session` menu) instantiates the topology within the FreeBSD kernel and enters Execute mode. In execute mode, the user can interact with the running emulated machines by double-clicking or right-clicking on them. The editing toolbar disappears and is replaced by an execute toolbar, which provides tools while running the emulation. Pressing the red `Stop` button (or choosing `Terminate` from the `Session` menu) will destroy the running emulation and return CORE to Edit mode.
Once editing is complete, pressing the green `Start` button (or choosing `Execute` from the `Session` menu) instantiates the topology within the Linux kernel and enters Execute mode. In execute mode, the user can interact with the running emulated machines by double-clicking or right-clicking on them. The editing toolbar disappears and is replaced by an execute toolbar, which provides tools while running the emulation. Pressing the red `Stop` button (or choosing `Terminate` from the `Session` menu) will destroy the running emulation and return CORE to Edit mode.
CORE can be started directly in Execute mode by specifying ``--start`` and a topology file on the command line:
::
core-gui --start ~/.core/configs/myfile.imn
Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be terminated. The emulation may be left running and the GUI can reconnect to an existing session at a later time.
@@ -62,8 +62,8 @@ There is also a **Batch** mode where CORE runs without the GUI and will instanti
::
core-gui --batch ~/.core/configs/myfile.imn
A session running in batch mode can be accessed using the ``vcmd`` command (or ``vimage`` on FreeBSD), or the GUI can connect to the session.
A session running in batch mode can be accessed using the ``vcmd`` command, or the GUI can connect to the session.
.. index:: closebatch
@@ -76,12 +76,12 @@ The session number is printed in the terminal when batch mode is started. This s
If you forget the session number, you can always start the CORE GUI and use :ref:`Session_Menu` CORE sessions dialog box.
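For instance, to terminate a batch-mode session from the command line, pass its session number (12345 below is a placeholder for the number printed when batch mode started) to ``core-gui``:

::

core-gui --closebatch 12345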
.. NOTE::
It is quite easy to have overlapping sessions when running in batch mode. This may become a problem when control networks are employed in these sessions as there could be addressing conflicts. See :ref:`Control_Network` for remedies.
.. NOTE::
If you like to use batch mode, consider writing a
CORE :ref:`Python script <Python_Scripting>` directly.
This enables access to the full power of the Python API.
The :ref:`File_Menu` has a basic `Export Python Script` option for getting
started with a GUI-designed topology.
@@ -92,8 +92,7 @@ The session number is printed in the terminal when batch mode is started. This s
.. index:: root privileges
The GUI can be run as a normal user on Linux. For FreeBSD, the GUI should be run
as root in order to start an emulation.
The GUI can be run as a normal user on Linux.
.. index:: port number
@@ -204,7 +203,7 @@ sub-menus, which appear when you click on their group icon.
wireless nodes based on the distance between them
* |rj45| *RJ45* - with the RJ45 Physical Interface Tool, emulated nodes can
be linked to real physical interfaces on the Linux or FreeBSD machine;
be linked to real physical interfaces;
using this tool, real networks and devices can be physically connected to
the live-running emulation (:ref:`RJ45_Tool`)
@@ -330,7 +329,7 @@ File Menu
The File menu contains options for manipulating the :file:`.imn`
:ref:`Configuration_Files`. Generally, these menu items should not be used in
Execute mode (:ref:`Modes_of_Operation`.)
.. index:: New
@@ -340,7 +339,7 @@ Execute mode (:ref:`Modes_of_Operation`.)
* *Open* - invokes the File Open dialog box for selecting a new :file:`.imn`
or XML file to open. You can change the default path used for this dialog
in the :ref:`Preferences` Dialog.
.. index:: Save
@@ -349,16 +348,16 @@ Execute mode (:ref:`Modes_of_Operation`.)
.. index:: Save As XML
* *Save As XML* - invokes the Save As dialog box for selecting a new
:file:`.xml` file for saving the current configuration in the XML file.
See :ref:`Configuration_Files`.
See :ref:`Configuration_Files`.
.. index:: Save As imn
* *Save As imn* - invokes the Save As dialog box for selecting a new
:file:`.imn`
topology file for saving the current configuration. Files are saved in the
*IMUNES network configuration* file format described in
:ref:`Configuration_Files`.
.. index:: Export Python script
.. index:: Execute Python script with options
* *Execute Python script with options* - invokes a File Open dialog box for selecting a
Python script to run and automatically connect to. After a selection is made,
a Python Script Options dialog box is invoked to allow for command-line options to be added.
The Python script must create a new CORE Session and add this session to the daemon's list of sessions
in order for this to work; see :ref:`Python_Scripting`.
* *Open current file in editor* - this opens the current topology file in the
``vim`` text editor. First you need to save the file. Once the file has been
edited with a text editor, you will need to reload the file to see your
changes. The text editor can be changed from the :ref:`Preferences` Dialog.
.. index:: Print
.. index:: printing
Edit Menu
---------

* *Cut*, *Copy*, *Paste* - used to cut, copy, and paste a selection. When nodes
are pasted, their node numbers are automatically incremented, and existing
links are preserved with new IP addresses assigned. Services and their
customizations are copied to the new node, but care should be taken, since
node IP addresses will have changed and old addresses may remain in any
custom service configurations. Annotations may also be copied and pasted.
The canvas menu provides commands for adding, removing, changing, and switching to different canvases.
altitude reference point used to convert between geographic and Cartesian
coordinate systems. By clicking the *Save as default* option, all new
canvases will be created with these properties. The default canvas size can
also be changed in the :ref:`Preferences` dialog box.
* *Wallpaper...* - used for setting the canvas background image,
:ref:`Customizing_your_Topology's_Look`.
.. index:: hide nodes
* *Show hidden nodes* - reveal nodes that have been hidden. Nodes are hidden by
selecting one or more nodes, right-clicking one and choosing *hide*.
.. index:: locked view
* *Locked* - toggles locked view; when the view is locked, nodes cannot be
moved around on the canvas with the mouse. This could be useful when
sharing the topology with someone and you do not expect them to change
things.
The tools menu lists different utility functions.
.. index:: autorearrange selected
* *Autorearrange selected* - automatically arranges the selected nodes on the
canvas.
.. index:: align to grid
Here are some standard widgets:
routing protocols. A line is drawn from each router halfway to the router ID
of an adjacent router. The color of the line is based on the OSPF adjacency
state such as Two-way or Full. To learn about the different colors, see the
*Configure Adjacency...* menu item. The :file:`vtysh` command is used to
dump OSPF neighbor information.
Only half of the line is drawn because each
router may be in a different adjacency state with respect to the other.
link. If the throughput exceeds a certain threshold, the link will become
highlighted. For wireless nodes which broadcast data to all nodes in range,
the throughput rate is displayed next to the node and the node will become
circled if the threshold is exceeded.
.. _Observer_Widgets:
of configured hooks, and buttons on the bottom left allow adding, editing,
and removing hook scripts. The new or edit button will open a hook script
editing window. A hook script is a shell script invoked on the host (not
within a virtual node); a minimal example appears after the list of states below.
The script is started at the session state specified in the drop down:
* *configuration* - when the user presses the *Start* button, node, link, and
other configuration data is sent to the backend. This state is also
reached when the user customizes a service.
* *instantiation* - after
configuration data has been sent, just before the nodes are created.
* *runtime* - all nodes and networks have been
built and are running. (This is the same state at which
the previously-named *global experiment script* was run.)
* *datacollect* - the user has pressed the
*Stop* button, but before services have been stopped and nodes have been
shut down.
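For example, a minimal *datacollect* hook script might gather node log files before the nodes are torn down (a sketch: it assumes the script runs on the host with the session directory as its working directory, and the paths are illustrative):

::

    #!/bin/sh
    # node private directories (e.g. n1.conf) reside in the session directory
    mkdir -p /tmp/core-results
    cp -r n*.conf/var.log /tmp/core-results/ 2>/dev/null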
* *Reset node positions* - if you have moved nodes around
using the mouse or by using a mobility module, choosing this item will reset
all nodes to their original position on the canvas. The node locations are
remembered when you first press the Start button.
* *Emulation servers...* - invokes the CORE emulation
servers dialog for configuring :ref:`Distributed_Emulation`.
* *Change Sessions...* - invokes the Sessions dialog for switching between
different
running sessions. This dialog is presented during startup when one or
more sessions are already running.
* *Options...* - presents per-session options, such as the IPv4 prefix to be
used, if any, for a control network
(see :ref:`Communicating_with_the_Host_Machine`); the ability to preserve
the session directory; and an on/off switch for SDT3D support.
Connecting with Physical Networks
---------------------------------

CORE's emulated networks run in real time, so they can be connected to live
physical networks. The RJ45 tool and the Tunnel tool help with connecting to
the real world. These tools are available from the *Link-layer nodes* menu.
When connecting two or more CORE emulations together, MAC address collisions
should be avoided. CORE automatically assigns MAC addresses to interfaces when
the emulation is started.
The main drawback is that one physical interface is required for each
connection. When the physical interface is assigned to CORE, it may not be used
for anything else. Another consideration is that the computer or network that
you are connecting to must be co-located with the CORE machine.
To place an RJ45 connection, click on the *Link-layer nodes* toolbar and select
the *RJ45 Tool* from the submenu. Click on the canvas near the node you want to
connect to, then configure the RJ45 node with the name of a
physical interface. A list of available interfaces will be shown, and one may
be selected by double-clicking its name in the list, or an interface name may
be entered into the text box.
.. NOTE::
When you press the Start button to instantiate your topology, the
interface assigned to the RJ45 will be connected to the CORE topology. The
interface can no longer be used by the system. For example, if there was an
IP address assigned to the physical interface before execution, the address
will be removed and placed inside the CORE node.
physical ports are available, but the (e.g. switching) hardware connected to
the physical port must support VLAN tagging, and the available bandwidth
will be shared.
You need to create separate VLAN virtual devices on the Linux host,
and then assign these devices to RJ45 nodes inside of CORE. The VLANning is
actually performed outside of CORE, so when the CORE emulated node receives a
packet, the VLAN tag will already be removed.
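For example, to create a VLAN device for VLAN ID 100 on physical interface ``eth0`` (the interface name and VLAN ID are illustrative):

::

    sudo ip link add link eth0 name eth0.100 type vlan id 100
    sudo ip link set dev eth0.100 up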
Tunneling can be helpful when the number of physical interfaces is limited or
when the peer is located on a different network. Also a physical interface does
not need to be dedicated to CORE as with the RJ45 tool.
The peer GRE tunnel endpoint may be another CORE machine or another
host that supports GRE tunneling. When placing a Tunnel node, initially
the node will display "UNASSIGNED". This text should be replaced with the IP
address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.
.. NOTE::
Be aware of possible MTU issues with GRE devices. The *gretap* device
has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
bridge's MTU
becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
large packets if other bridge ports have a higher MTU such as 1,500 bytes.
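One possible workaround (a sketch, with an illustrative device name) is to make the MTUs match by lowering the MTU of the other devices joined to the bridge:

::

    sudo ip link set dev eth0 mtu 1458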
.. index:: ip link command
Here are example commands for building the other end of a tunnel on a Linux
machine. In this example, a router in CORE has the virtual address
``10.0.0.1/24`` and the CORE host machine has the (real) address
``198.51.100.34/24``. The Linux box
that will connect with the CORE machine is reachable over the (real) network
at ``198.51.100.76/24``. The tunnel peer will be given
an address from the subnet of the virtual router node,
``10.0.0.2/24``.
::
# these commands are run on the tunnel peer
sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
sudo ip addr add 10.0.0.2/24 dev gt0
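Once both ends are configured, reachability can be checked from the tunnel peer (assuming the CORE-side Tunnel node has been configured with the peer's address):

::

    ping 10.0.0.1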
the node, and SSH with X11 forwarding can be used from the host to the node:
ssh -X 172.16.0.5 xclock
Note that the :file:`coresendmsg` utility can be used for a node to send
messages to the CORE daemon running on the host (if ``listenaddr = 0.0.0.0``
is set in the :file:`/etc/core/core.conf` file) to interact with the running
emulation. For example, a node may move itself or other nodes, or change
its icon based on some node state.
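For example, a node might reposition itself with something like the following (a hypothetical sketch; run ``coresendmsg -h`` to see the exact message types and TLV names supported by your version):

::

    coresendmsg node number=3 xpos=125 ypos=525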
Wired Networks
--------------

Wired networks are created using the *Link Tool* to draw a link between two
nodes. This automatically draws a red line representing an Ethernet link and
creates new interfaces on network-layer nodes.
.. index:: link configuration
.. index:: lanswitch
Link-layer nodes are provided for modeling wired networks. These do not create
a separate network stack when instantiated, but are implemented using Linux bridging.
These are the hub, switch, and wireless LAN nodes. The hub copies each packet from
the incoming link to every connected link, while the switch behaves more like an
Ethernet switch and keeps track of the Ethernet address of the connected peer,
forwarding unicast traffic only to the appropriate ports.
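Under the hood, each of these nodes is realized as a Linux bridge on the host. While a session is running, the bridges can be listed (a sketch; CORE assigns its own bridge names, which will vary):

::

    brctl show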
The wireless LAN (WLAN) is covered in the next section.
on platform. See the table below for a brief overview of wireless model types.
============= ===================== ======== ==================================================================
Model Type    Supported Platform(s) Fidelity Description
============= ===================== ======== ==================================================================
Basic on/off  Linux                 Low      Linux Ethernet bridging with ebtables
EMANE Plug-in Linux                 High     TAP device connected to EMANE emulator with pluggable MAC and PHY radio types
============= ===================== ======== ==================================================================
To quickly build a wireless network, you can first place several router nodes
onto the canvas. If you have the
:ref:`Quagga MDR software <Quagga_Routing_Software>` installed, it is
recommended that you use the *mdr* node type for reduced routing overhead. Next
choose the *wireless LAN* from the *Link-layer nodes* submenu. First set the
dragging them, and wireless links will be dynamically made or broken.
The *EMANE* tab lists available EMANE models to use for wireless networking.
See the :ref:`EMANE` chapter for details on using EMANE.
.. _Mobility_Scripting:
Mobility Scripting
------------------

.. index:: mobility scripting
CORE has a few ways to script mobility.
* ns-2 script - the script specifies either absolute positions
or waypoints with a velocity. Locations are given with Cartesian coordinates.
@ -1226,7 +1218,7 @@ CORE has a few ways to script mobility.
For the first method, you can create a mobility script using a text
editor, or using a tool such as `BonnMotion <http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/>`_, and associate the script with one of the wireless networks
using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
button, and set the *mobility script file* field in the resulting *ns2script*
configuration dialog.
The format of an ns-2 mobility script looks like:

::

    $node_(2) set X_ 144.0
    $node_(2) set Y_ 240.0
    $node_(2) set Z_ 0.00
    $ns_ at 1.00 "$node_(2) setdest 130.0 280.0 15.0"
The first three lines set an initial position for node 2. The last line in the
above example causes node 2 to move towards the destination `(130, 280)` at
speed `15`. All units are screen coordinates, with speed in units per second.
The
total script time is learned after all nodes have reached their waypoints.
Initially, the time slider in the mobility script dialog will not be
Distributed Emulation
---------------------

A large emulation scenario can be deployed on multiple emulation servers and
controlled by a single GUI. The GUI, representing the entire topology, can be
run on one of the emulation servers or on a separate machine. Emulations can be
distributed on Linux.
Each machine that will act as an emulation server needs to have CORE installed.
It is not important to have the GUI component but the CORE Python daemon
:file:`core-daemon` needs to be installed. Set the ``listenaddr`` line in the
:file:`/etc/core/core.conf` configuration file so that the CORE Python
daemon will respond to commands from other servers:
::
pidfile = /var/run/core-daemon.pid
logfile = /var/log/core-daemon.log
listenaddr = 0.0.0.0
The ``listenaddr`` should be set to the address of the interface that should
receive CORE API control commands from the other servers; setting ``listenaddr = 0.0.0.0`` causes the daemon to listen on all interfaces.
Servers are configured by choosing *Emulation servers...* from the *Session*
menu. Server parameters are configured in the list below and stored in a
*servers.conf* file for use in different scenarios. The IP address and port of
the server must be specified. The name of each server will be saved in the
topology file as each node's location.
.. NOTE::
The server that the GUI connects with
is referred to as the master server.
The user needs to assign nodes to emulation servers in the scenario. Making no
assignment means the node will be emulated on the master server.

In the configuration window of every node, a drop-down box located between
the *Node name* and the *Image* button will select the name of the emulation
server. By default, this menu shows *(none)*, indicating that the node will
be emulated locally on the master. When entering Execute mode, the CORE GUI
will deploy the node on its assigned emulation server.
Another way to assign emulation servers is to select one or more nodes using
If there is a link between two nodes residing on different servers, the GUI
will draw the link with a dashed line, and automatically create necessary
tunnels between the nodes when executed. Care should be taken to arrange the
topology such that the number of tunnels is minimized. The tunnels carry data
between servers to connect nodes as specified in the topology.
These tunnels are created using GRE tunneling, similar to the
:ref:`Tunnel_Tool`.
Default Services and Node Types
-------------------------------

Here are the default node types and their services:
.. index:: physical nodes
* *router* - zebra, OSFPv2, OSPFv3, and IPForward services for IGP
* *prouter* - a physical router, having the same default services as the
*router* node type; for incorporating Linux testbed machines into an
emulation, the machine type (:ref:`Machine_Types`) is set to :ref:`physical`.
Configuration files can be automatically generated by each service. For
example, CORE automatically generates routing protocol configuration for the
service. Generally they send a kill signal to the running process using the
*kill* or *killall* commands. If the service does not terminate
the running processes using a shutdown command, the processes will be killed
when the *vnoded* daemon is terminated (with *kill -9*) and
the namespace destroyed. It is a good practice to
specify shutdown commands, which will allow for proper process termination, and
for run-time control of stopping and restarting services.
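For example, a service that starts a hypothetical ``myserviced`` process might pair its startup command with a shutdown command such as:

::

    killall myserviced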
in the :file:`/etc/core/core.conf` configuration file. A sample is provided in
the :file:`myservices/` directory.
.. NOTE::
The directory name used in `custom_services_dir` should be unique and
should not correspond to
any existing Python module name. For example, don't use the name `subprocess`
or `services`.
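The corresponding :file:`core.conf` line might look like the following (the path is illustrative):

::

    custom_services_dir = /home/username/.core/myservices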
create a bridge or namespace, or the failure to launch EMANE processes for an
EMANE-based network.
Clicking on an exception displays details for that
exception. If a node number is specified, that node is highlighted on the
canvas when the exception is selected. The exception source is a text string
to help trace where the exception occurred; "service:UserDefined" for example,
would appear for a failed validation command with the UserDefined service.
list and for viewing the CORE daemon and node log files.
.. index:: CEL batch mode
.. NOTE::
In batch mode, exceptions received from the CORE daemon are displayed on
the console.
.. _Configuration_Files:
Configuration Files
-------------------

Configurations are saved to :file:`.xml` or :file:`.imn` topology files using
the *File* menu. You
can easily edit these files with a text editor.
Any time you edit the topology
file, you will need to stop the emulation if it is running and reload the
file.
The :file:`.xml` `file schema is specified by NRL <http://www.nrl.navy.mil/itd/ncs/products/mnmtools>`_ and there are two versions to date:
version 0.0 and version 1.0,
with 1.0 as the current default. CORE can open either XML version. However, the
``xmlfilever`` line in :file:`/etc/core/core.conf` controls the version of the XML file
that CORE will create.
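For example, to state the version explicitly (``1.0`` is already the default, per the above):

::

    xmlfilever = 1.0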
.. index:: Scenario Plan XML
In version 1.0, the XML file is also referred to as the Scenario Plan. The Scenario Plan is
made up of the following:
* `Network Plan` - describes nodes, hosts, interfaces, and the networks to
which they belong.
* `Motion Plan` - describes position and motion patterns for nodes in an
emulation.
* `Visualization Plan` - meta-data that is not part of the NRL XML schema but
used only by CORE. For example, GUI options, canvas and annotation info, etc.
are contained here.
* `Test Bed Mappings` - describes mappings of nodes, interfaces and EMANE modules in the scenario to
test bed hardware.
CORE includes Test Bed Mappings in XML files that are saved while the scenario is running.
In the :file:`.imn` file format, indentation is one tab character.
.. tip::
There are several topology examples included with CORE in
the :file:`configs/` directory.
This directory can be found in :file:`~/.core/configs`, or
installed to the filesystem
under :file:`/usr[/local]/share/examples/configs`.