initial pass at removing bsd and code related to using bsd nodes
parent 4858151d7c
commit bc1e3e70c9
62 changed files with 720 additions and 18008 deletions
@@ -4,15 +4,13 @@
.. |CENTOSVERSION| replace:: 6.x or 7.x
.. |BSDVERSION| replace:: 9.0
.. |CORERPM| replace:: 1.fc20.x86_64.rpm
.. |CORERPM2| replace:: 1.fc20.noarch.rpm
.. |COREDEB| replace:: 0ubuntu1_precise_amd64.deb
.. |COREDEB2| replace:: 0ubuntu1_precise_all.deb

.. |QVER| replace:: quagga-0.99.21mr2.2
.. |QVERDEB| replace:: quagga-mr_0.99.21mr2.2_amd64.deb
.. |QVERRPM| replace:: quagga-0.99.21mr2.2-1.fc16.x86_64.rpm

.. |APTDEPS| replace:: bash bridge-utils ebtables iproute libev-dev python

@@ -20,6 +18,6 @@
.. |APTDEPS3| replace:: autoconf automake gcc libev-dev make python-dev libreadline-dev pkg-config imagemagick help2man

.. |YUMDEPS| replace:: bash bridge-utils ebtables iproute libev python procps-ng net-tools
.. |YUMDEPS2| replace:: tcl tk tkimg
.. |YUMDEPS3| replace:: autoconf automake make libev-devel python-devel ImageMagick help2man

165  doc/devguide.rst
@@ -39,10 +39,6 @@ These are being actively developed as of CORE |version|:
* *doc* - Documentation for the manual lives here in reStructuredText format.
* *packaging* - Control files and script for building CORE packages are here.

These directories are not so actively developed:

* *kernel* - patches and modules mostly related to FreeBSD.

.. _The_CORE_API:

The CORE API

@@ -59,8 +55,7 @@ The GUI communicates with the CORE daemon using the API. One emulation server
communicates with another using the API. The API also allows other systems to
interact with the CORE emulation. The API allows another system to add, remove,
or modify nodes and links, and enables executing commands on the emulated
systems. On FreeBSD, the API is used for enhancing the wireless LAN
calculations. Wireless link parameters are updated on-the-fly based on node
systems. Wireless link parameters are updated on-the-fly based on node
positions.

CORE listens on a local TCP port for API messages. The other system could be

@@ -88,7 +83,7 @@ The *vnoded* daemon is the program used to create a new namespace, and
listen on a control channel for commands that may instantiate other processes.
This daemon runs as PID 1 in the container. It is launched automatically by
the CORE daemon. The control channel is a UNIX domain socket usually named
:file:`/tmp/pycore.23098/n3`, for node 3 running on CORE
session 23098, for example. Root privileges are required for creating a new
namespace.
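As a quick, illustrative check (reusing the session and node numbers from the example above), the control channels can be listed and *vcmd* used to confirm that *vnoded* is running as PID 1 inside the node:

::

  # list the per-node control channels for example session 23098
  ls -l /tmp/pycore.23098/
  # run ps inside node 3; vnoded should appear as PID 1
  vcmd -c /tmp/pycore.23098/n3 -- ps aux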

@@ -107,13 +102,13 @@ using a command such as:
::

gnome-terminal -e vcmd -c /tmp/pycore.50160/n1 -- bash

Similarly, the IPv4 routes Observer Widget will run a command to display the routing table using a command such as:
::

vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro

.. index:: core-cleanup

@@ -139,7 +134,7 @@ network namespace emulation.
tc qdisc show
# view the rules that make the wireless LAN work
ebtables -L

Below is a transcript of creating two emulated nodes and connecting them together with a wired link:

@@ -179,156 +174,8 @@ Below is a transcript of creating two emulated nodes and connecting them togethe
# display connectivity and ping from node 1 to node 2
brctl show
vcmd -c /tmp/n1.ctl -- ping 10.0.0.2

The above example script can be found as :file:`twonodes.sh` in the
:file:`examples/netns` directory. Use *core-cleanup* to clean up after the
script.
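An illustrative run of that workflow, with paths assumed to be relative to the CORE source tree:

::

  # from the examples/netns directory of the CORE source
  cd examples/netns
  sudo sh twonodes.sh
  # remove leftover bridges, namespaces, and ebtables rules afterwards
  sudo core-cleanup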

.. _FreeBSD_Commands:

FreeBSD Commands
================

.. index:: vimage
.. index:: ngctl
.. index:: Netgraph
.. _FreeBSD_Kernel_Commands:

FreeBSD Kernel Commands
-----------------------

The FreeBSD kernel emulation controlled by CORE is realized through several
userspace commands. The CORE GUI itself could be thought of as a glorified
script that dispatches these commands to build and manage the kernel emulation.

* **vimage** - the vimage command, short for "virtual image", is used to
create lightweight virtual machines and execute commands within the virtual
image context. On a FreeBSD CORE machine, see the *vimage(8)* man page for
complete details. The vimage command comes from the VirtNet project which
virtualizes the FreeBSD network stack.

* **ngctl** - the ngctl command, short for "netgraph control", creates
Netgraph nodes and hooks, connects them together, and allows for various
interactions with the Netgraph nodes. See the *ngctl(8)* man page for
complete details. The ngctl command is built-in to FreeBSD because the
Netgraph system is part of the kernel.

Both commands must be run as root.
Some example usage of the *vimage* command follows below.
::

vimage # displays the current virtual image
vimage -l # lists running virtual images
vimage e0_n0 ps aux # list the processes running on node 0
for i in 1 2 3 4 5
do # execute a command on all nodes
vimage e0_n$i sysctl -w net.inet.ip.redirect=0
done

The *ngctl* command is more complex, due to the variety of Netgraph nodes
available and each of their options.
::

ngctl l # list active Netgraph nodes
ngctl show e0_n8: # display node hook information
ngctl msg e0_n0-n1: getstats # get pkt count statistics from a pipe node
ngctl shutdown \\[0x0da3\\]: # shut down unnamed node using hex node ID

There are many other combinations of commands not shown here. See the online
manual (man) pages for complete details.

Below is a transcript of creating two emulated nodes, `router0` and `router1`,
and connecting them together with a link:

.. index:: create nodes from command-line

.. index:: command-line

::

# create node 0
vimage -c e0_n0
vimage e0_n0 hostname router0
ngctl mkpeer eiface ether ether
vimage -i e0_n0 ngeth0 eth0
vimage e0_n0 ifconfig eth0 link 40:00:aa:aa:00:00
vimage e0_n0 ifconfig lo0 inet localhost
vimage e0_n0 sysctl net.inet.ip.forwarding=1
vimage e0_n0 sysctl net.inet6.ip6.forwarding=1
vimage e0_n0 ifconfig eth0 mtu 1500

# create node 1
vimage -c e0_n1
vimage e0_n1 hostname router1
ngctl mkpeer eiface ether ether
vimage -i e0_n1 ngeth1 eth0
vimage e0_n1 ifconfig eth0 link 40:00:aa:aa:0:1
vimage e0_n1 ifconfig lo0 inet localhost
vimage e0_n1 sysctl net.inet.ip.forwarding=1
vimage e0_n1 sysctl net.inet6.ip6.forwarding=1
vimage e0_n1 ifconfig eth0 mtu 1500

# create a link between n0 and n1
ngctl mkpeer eth0@e0_n0: pipe ether upper
ngctl name eth0@e0_n0:ether e0_n0-n1
ngctl connect e0_n0-n1: eth0@e0_n1: lower ether
ngctl msg e0_n0-n1: setcfg \\
{{ bandwidth=100000000 delay=0 upstream={ BER=0 duplicate=0 } downstream={ BER=0 duplicate=0 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ queuelen=50 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ queuelen=50 } }}

Other FreeBSD commands that may be of interest:
.. index:: FreeBSD commands

* **kldstat**, **kldload**, **kldunload** - list, load, and unload
FreeBSD kernel modules
* **sysctl** - display and modify various pieces of kernel state
* **pkg_info**, **pkg_add**, **pkg_delete** - list, add, or remove
FreeBSD software packages.
* **vtysh** - start a Quagga CLI for router configuration

Netgraph Nodes
--------------

.. index:: Netgraph

.. index:: Netgraph nodes

Each Netgraph node implements a protocol or processes data in some well-defined
manner (see the `netgraph(4)` man page). The netgraph source code is located
in `/usr/src/sys/netgraph`. There you might discover additional nodes that
implement some desired functionality, that have not yet been included in CORE.
Using certain kernel commands, you can likely include these types of nodes into
your CORE emulation.
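For illustration only (the module name here is an arbitrary example, not something CORE requires), an additional Netgraph node type can be loaded with the *kld* commands listed earlier in this section:

::

  # load an extra Netgraph node type built from /usr/src/sys/netgraph
  kldload ng_vlan
  # confirm which ng_ modules are loaded
  kldstat | grep ng_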

The following Netgraph nodes are used by CORE:

* **ng_bridge** - switch node performs Ethernet bridging

* **ng_cisco** - Cisco HDLC serial links

* **ng_eiface** - virtual Ethernet interface that is assigned to each virtual machine

* **ng_ether** - physical Ethernet devices, used by the RJ45 tool

* **ng_hub** - hub node

* **ng_pipe** - used for wired Ethernet links, imposes packet delay, bandwidth restrictions, and other link characteristics

* **ng_socket** - socket used by *ngctl* utility

* **ng_wlan** - wireless LAN node

282  doc/install.rst
@@ -9,14 +9,14 @@
Installation
************

This chapter describes how to set up a CORE machine. Note that the easiest
way to install CORE is using a binary
package on Ubuntu or Fedora (deb or rpm) using the distribution's package
manager
to automatically install dependencies, see :ref:`Installing_from_Packages`.

Ubuntu and Fedora Linux are the recommended distributions for running CORE. Ubuntu |UBUNTUVERSION| and Fedora |FEDORAVERSION| ship with kernels with support for namespaces built-in. They support the latest hardware. However,
these distributions are not strictly required. CORE will likely work on other
flavors of Linux, see :ref:`Installing_from_Source`.

The primary dependencies are Tcl/Tk (8.5 or newer) for the GUI, and Python 2.6 or 2.7 for the CORE daemon.
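A quick way to confirm those versions on the target machine (a simple sketch; package names vary by distribution):

::

  python --version                    # expect 2.6.x or 2.7.x
  echo 'puts $tcl_version' | tclsh    # expect 8.5 or newer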

@@ -50,7 +50,7 @@ Prerequisites

.. index:: Prerequisites

The Linux or FreeBSD operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon require Python. Details of the individual software packages required can be found in the installation steps.
A Linux operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon requires Python. Details of the individual software packages required can be found in the installation steps.

.. _Required_Hardware:

@@ -61,7 +61,7 @@ Required Hardware

.. index:: System requirements

Any computer capable of running Linux or FreeBSD should be able to run CORE. Since the physical machine will be hosting numerous virtual machines, as a general rule you should select a machine having as much RAM and CPU resources as possible.
Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous virtual machines, as a general rule you should select a machine having as much RAM and CPU resources as possible.

A *general recommendation* would be:

@@ -70,9 +70,9 @@ A *general recommendation* would be:
* about 3 MB of free disk space (plus more for dependency packages such as Tcl/Tk)
* X11 for the GUI, or remote X11 over SSH

The computer can be a laptop, desktop, or rack-mount server. A keyboard, mouse,
and monitor are not required if a network connection is available
for remotely accessing the machine. A 3D accelerated graphics card
is not required.

.. _Required_Software:

@@ -80,18 +80,13 @@ is not required.
Required Software
-----------------

CORE requires the Linux or FreeBSD operating systems because it uses virtualization provided by the kernel. It does not run on the Windows or Mac OS X operating systems (unless it is running within a virtual machine guest.) There are two
different virtualization technologies that CORE can currently use:
Linux network namespaces and FreeBSD jails,
CORE requires a Linux operating system because it uses virtualization provided by the kernel. It does not run on the Windows or Mac OS X operating systems (unless it is running within a virtual machine guest.)
The virtualization technology that CORE currently uses is
Linux network namespaces;
see :ref:`How_Does_it_Work?` for virtualization details.

**Linux network namespaces is the recommended platform.** Development is focused here and it supports the latest features. It is the easiest to install because there is no need to patch, install, and run a special Linux kernel.
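To verify that a stock kernel already has the needed support, an illustrative check (assuming the usual ``/boot/config`` location used by Ubuntu and Fedora):

::

  grep CONFIG_NET_NS /boot/config-$(uname -r)   # should report CONFIG_NET_NS=y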

FreeBSD |BSDVERSION|-RELEASE may offer the best scalability. If your
applications run under FreeBSD and you are comfortable with that platform,
this may be a good choice. Device and application support by BSD
may not be as extensive as Linux.

The CORE GUI requires the X.Org X Window system (X11), or can run over a
remote X11 session. For specific Tcl/Tk, Python, and other libraries required
to run CORE, refer to the :ref:`Installation` section.

@@ -113,7 +108,7 @@ Installing from Packages

The easiest way to install CORE is using the pre-built packages. The package
managers on Ubuntu or Fedora will
automatically install dependencies for you.
You can obtain the CORE packages from the `CORE downloads <http://downloads.pf.itd.nrl.navy.mil/core/packages/>`_ page
or `CORE GitHub <https://github.com/coreemu/core/releases>`_.

@@ -143,7 +138,7 @@ First install the Ubuntu |UBUNTUVERSION| operating system.
to select which Quagga package to use.

* **Optional:** install the prerequisite packages (otherwise skip this
step and have the package manager install them for you.)

.. parsed-literal::

@@ -152,13 +147,13 @@ First install the Ubuntu |UBUNTUVERSION| operating system.
# update-manager instead of apt-get update/dist-upgrade
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install |APTDEPS| |APTDEPS2|

* Install Quagga for routing. If you plan on working with wireless
networks, we recommend
installing
`OSPF MDR <http://www.nrl.navy.mil/itd/ncs/products/ospf-manet>`__
(replace `amd64` below with `i386` if needed
to match your architecture):

.. parsed-literal::

@@ -172,7 +167,7 @@ First install the Ubuntu |UBUNTUVERSION| operating system.
::

sudo apt-get install quagga

* Install the CORE deb packages for Ubuntu, using a GUI that automatically
resolves dependencies (note that the absolute path to the deb file
must be used with ``software-center``):

@@ -181,24 +176,24 @@ First install the Ubuntu |UBUNTUVERSION| operating system.

software-center /home/user/Downloads/core-daemon\_\ |version|-|COREDEB|
software-center /home/user/Downloads/core-gui\_\ |version|-|COREDEB2|

or install from command-line:

.. parsed-literal::

sudo dpkg -i core-daemon\_\ |version|-|COREDEB|
sudo dpkg -i core-gui\_\ |version|-|COREDEB2|

* Start the CORE daemon as root.
::

sudo /etc/init.d/core-daemon start

* Run the CORE GUI as a normal user:
::

core-gui

After running the ``core-gui`` command, a GUI should appear with a canvas
for drawing topologies. Messages will print out on the console about

@@ -217,7 +212,7 @@ examples below, replace with `i686` is using a 32-bit architecture. Also,
Fedora release number.

* **CentOS only:** in order to install the `libev` and `tkimg` prerequisite
packages, you
first need to install the `EPEL <http://fedoraproject.org/wiki/EPEL>`_ repo
(Extra Packages for Enterprise Linux):

@@ -229,7 +224,7 @@ Fedora release number.

* **CentOS 7.x only:** as of this writing, the `tkimg` prerequisite package
is missing from EPEL 7.x, but the EPEL 6.x package can be manually installed
from
`here <http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/tkimg.html>`_

::

@@ -249,7 +244,7 @@ Fedora release number.
yum install |YUMDEPS| |YUMDEPS2|

* **Optional (Fedora 17+):** Fedora 17 and newer have an additional
prerequisite providing the required netem kernel modules (otherwise
skip this step and have the package manager install it for you.)

@@ -272,7 +267,7 @@ Fedora release number.
::

yum install quagga

* Install the CORE RPM packages for Fedora and automatically resolve
dependencies:

@@ -281,14 +276,14 @@ Fedora release number.

yum localinstall python-core_|service|-|version|-|CORERPM| --nogpgcheck
yum localinstall core-gui-|version|-|CORERPM2| --nogpgcheck

or install from the command-line:

.. parsed-literal::

rpm -ivh python-core_|service|-|version|-|CORERPM|
rpm -ivh core-gui-|version|-|CORERPM2|

* Turn off SELINUX by setting ``SELINUX=disabled`` in the :file:`/etc/sysconfig/selinux` file, and adding ``selinux=0`` to the kernel line in
your :file:`/etc/grub.conf` file; on Fedora 15 and newer, disable sandboxd using ``chkconfig sandbox off``;
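One possible way to make those edits non-interactively (back up the files first; the paths are the ones named above, and the kernel line in :file:`/etc/grub.conf` still needs to be edited by hand):

::

  sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
  # append selinux=0 to the kernel line in /etc/grub.conf, then:
  sudo chkconfig sandbox off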

@@ -304,12 +299,12 @@ Fedora release number.
systemctl start core-daemon.service
# or for CentOS:
/etc/init.d/core-daemon start

* Run the CORE GUI as a normal user:
::

core-gui

After running the ``core-gui`` command, a GUI should appear with a canvas
for drawing topologies. Messages will print out on the console about

@@ -335,11 +330,11 @@ These packages are not required for normal binary package installs.
sudo apt-get install |APTDEPS| \\
|APTDEPS2| \\
|APTDEPS3|

You can obtain the CORE source from the `CORE source <http://downloads.pf.itd.nrl.navy.mil/core/source/>`_ page. Choose either a stable release version or
the development snapshot available in the `nightly_snapshots` directory.
The ``-j8`` argument to ``make`` will run eight simultaneous jobs, to speed up
builds on multi-core systems.

.. parsed-literal::

@@ -350,9 +345,9 @@ builds on multi-core systems.
./configure
make -j8
sudo make install

The CORE Manual documentation is built separately from the :file:`doc/`
sub-directory in the source. It requires Sphinx:

.. parsed-literal::

@@ -376,16 +371,16 @@ These packages are not required for normal binary package installs.
yum install |YUMDEPS| \\
|YUMDEPS2| \\
|YUMDEPS3|

.. NOTE::
For a minimal X11 installation, also try these packages::

yum install xauth xterm urw-fonts

You can obtain the CORE source from the `CORE source <http://downloads.pf.itd.nrl.navy.mil/core/source/>`_ page. Choose either a stable release version or
the development snapshot available in the :file:`nightly_snapshots` directory.
The ``-j8`` argument to ``make`` will run eight simultaneous jobs, to speed up
builds on multi-core systems. Notice the ``configure`` flag to tell the build
system that a systemd service file should be installed under Fedora.

@@ -397,18 +392,12 @@ system that a systemd service file should be installed under Fedora.
./configure --with-startup=systemd
make -j8
sudo make install

Note that the Linux RPM and Debian packages do not use the ``/usr/local``
prefix, and files are instead installed to ``/usr/sbin``, and
``/usr/lib``. This difference is a result of aligning with the directory
structure of Linux packaging systems and FreeBSD ports packaging.

Another note is that the Python distutils in Fedora Linux will install the CORE
Python modules to :file:`/usr/lib/python2.7/site-packages/core`, instead of
using the :file:`dist-packages` directory.

The CORE Manual documentation is built separately from the :file:`doc/`
sub-directory in the source. It requires Sphinx:

.. parsed-literal::

@@ -438,7 +427,7 @@ CentOS/EL6 does not use the systemd service file, so the `configure` option
`--with-startup=systemd` should be omitted:

::

./configure

@@ -448,12 +437,12 @@ CentOS/EL6 does not use the systemd service file, so the `configure` option
Installing from Source on SUSE
------------------------------

To build CORE from source on SUSE or OpenSUSE,
use the similar instructions shown in :ref:`Installing_from_Source_on_Fedora`,
except that the following `configure` option should be used:

::

./configure --with-startup=suse

This causes a separate init script to be installed that is tailored towards SUSE systems.

@@ -463,153 +452,6 @@ The `zypper` command is used instead of `yum`.
For OpenSUSE/Xen based installations, refer to the `README-Xen` file included
in the CORE source.

.. _Installing_from_Source_on_FreeBSD:

Installing from Source on FreeBSD
---------------------------------

.. index:: kernel patch

**Rebuilding the FreeBSD Kernel**

The FreeBSD kernel requires a small patch to allow per-node directories in the
filesystem. Also, the `VIMAGE` build option needs to be turned on to enable
jail-based network stack virtualization. The source code for the FreeBSD
kernel is located in :file:`/usr/src/sys`.

Instructions below will use the :file:`/usr/src/sys/amd64` architecture
directory, but the directory :file:`/usr/src/sys/i386` should be substituted
if you are using a 32-bit architecture.

The kernel patch is available from the CORE source tarball under core-|version|/kernel/symlinks-8.1-RELEASE.diff. This patch applies to the
FreeBSD 8.x or 9.x kernels.

.. parsed-literal::

cd /usr/src/sys
# first you can check if the patch applies cleanly using the '-C' option
patch -p1 -C < ~/core-|version|/kernel/symlinks-8.1-RELEASE.diff
# without '-C' applies the patch
patch -p1 < ~/core-|version|/kernel/symlinks-8.1-RELEASE.diff

A kernel configuration file named :file:`CORE` can be found within the source tarball: core-|version|/kernel/freebsd8-config-CORE. The config is valid for
FreeBSD 8.x or 9.x kernels.

The contents of this configuration file are shown below; you can edit it to suit your needs.

::

# this is the FreeBSD 9.x kernel configuration file for CORE
include GENERIC
ident CORE

options VIMAGE
nooptions SCTP
options IPSEC
device crypto

options IPFIREWALL
options IPFIREWALL_DEFAULT_TO_ACCEPT

The kernel configuration file can be linked or copied to the kernel source directory. Use it to configure and build the kernel:

.. parsed-literal::

cd /usr/src/sys/amd64/conf
cp ~/core-|version|/kernel/freebsd8-config-CORE CORE
config CORE
cd ../compile/CORE
make cleandepend && make depend
make -j8 && make install

Change the number 8 above to match the number of CPU cores you have times two.
Note that the ``make install`` step will move your existing kernel to
``/boot/kernel.old`` and removes that directory if it already exists. Reboot to
enable this new patched kernel.
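After rebooting, a minimal sanity check that the rebuilt kernel is the one running (the ident comes from the configuration file above):

::

  uname -i    # should print CORE, the ident of the rebuilt kernel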

**Building CORE from Source on FreeBSD**

Here are the prerequisite packages from the FreeBSD ports system:

::

pkg_add -r tk85
pkg_add -r libimg
pkg_add -r bash
pkg_add -r libev
pkg_add -r sudo
pkg_add -r python
pkg_add -r autotools
pkg_add -r gmake

Note that if you are installing to a bare FreeBSD system and want to SSH with X11 forwarding to that system, these packages will help:

::

pkg_add -r xauth
pkg_add -r xorg-fonts

The ``sudo`` package needs to be configured so a normal user can run the CORE
GUI using the command ``core-gui`` (opening a shell window on a node uses a
command such as ``sudo vimage n1``.)
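A deliberately broad example of such a sudoers entry, added with ``visudo`` (the group name and scope are assumptions; tighten the allowed command list to suit your site):

::

  %wheel ALL=(ALL) NOPASSWD: ALL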

On FreeBSD, the CORE source is built using autotools and gmake:

.. parsed-literal::

tar xzf core-|version|.tar.gz
cd core-|version|
./bootstrap.sh
./configure
gmake -j8
sudo gmake install

Build and install the ``vimage`` utility for controlling virtual images. The source can be obtained from `FreeBSD SVN <http://svn.freebsd.org/viewvc/base/head/tools/tools/vimage/>`_, or it is included with the CORE source for convenience:

.. parsed-literal::

cd core-|version|/kernel/vimage
make
make install

.. index:: FreeBSD; kernel modules

.. index:: kernel modules

.. index:: ng_wlan and ng_pipe

On FreeBSD you should also install the CORE kernel modules for wireless emulation. Perform this step after you have recompiled and installed FreeBSD kernel.

.. parsed-literal::

cd core-|version|/kernel/ng_pipe
make
sudo make install
cd ../ng_wlan
make
sudo make install

The :file:`ng_wlan` kernel module allows for the creation of WLAN nodes. This
is a modified :file:`ng_hub` Netgraph module. Instead of packets being copied
to every connected node, the WLAN maintains a hash table of connected node
pairs. Furthermore, link parameters can be specified for node pairs, in
addition to the on/off connectivity. The parameters are tagged to each packet
and sent to the connected :file:`ng_pipe` module. The :file:`ng_pipe` has been
modified to read any tagged parameters and apply them instead of its default
link effects.

The :file:`ng_wlan` also supports linking together multiple WLANs across different machines using the :file:`ng_ksocket` Netgraph node, for distributed emulation.

The Quagga routing suite is recommended for routing,
:ref:`Quagga_Routing_Software` for installation.

@@ -624,12 +466,12 @@ Virtual networks generally require some form of routing in order to work (e.g.
to automatically populate routing tables for routing packets from one subnet
to another.) CORE builds OSPF routing protocol
configurations by default when the blue router
node type is used. The OSPF protocol is available
from the `Quagga open source routing suite <http://www.quagga.net>`_.
Other routing protocols are available using different
node services, :ref:`Default_Services_and_Node_Types`.

Quagga is not specified as a dependency for the CORE packages because
there are two different Quagga packages that you may use:

* `Quagga <http://www.quagga.net>`_ - the standard version of Quagga, suitable for static wired networks, and usually available via your distribution's package manager.

@@ -639,7 +481,7 @@ there are two different Quagga packages that you may use:

.. index:: MANET Designated Routers (MDR)

*
`OSPF MANET Designated Routers <http://www.nrl.navy.mil/itd/ncs/products/ospf-manet>`_ (MDR) - the Quagga routing suite with a modified version of OSPFv3,
optimized for use with mobile wireless networks. The *mdr* node type (and the MDR service) requires this variant of Quagga.

@@ -651,26 +493,19 @@ otherwise install the standard version of Quagga using your package manager or f
Installing Quagga from Packages
-------------------------------

To install the standard version of Quagga from packages, use your package
manager (Linux) or the ports system (FreeBSD).
To install the standard version of Quagga from packages, use your package manager (Linux).

Ubuntu users:
::

sudo apt-get install quagga

Fedora users:
::

yum install quagga

FreeBSD users:
::

pkg_add -r quagga

To install the Quagga variant having OSPFv3 MDR, first download the
appropriate package, and install using the package manager.

Ubuntu users:

@@ -709,7 +544,7 @@ To compile Quagga to work with CORE on Linux:
--localstatedir=/var/run/quagga
make
sudo make install

Note that the configuration directory :file:`/usr/local/etc/quagga` shown for
Quagga above could be :file:`/etc/quagga`, if you create a symbolic link from

@@ -723,26 +558,9 @@ If you try to run quagga after installing from source and get an error such as:

error while loading shared libraries libzebra.so.0

this is usually a sign that you have to run `sudo ldconfig` to refresh the
cache file.

To compile Quagga to work with CORE on FreeBSD:

.. parsed-literal::

tar xzf |QVER|.tar.gz
cd |QVER|
./configure --enable-user=root --enable-group=wheel \\
--sysconfdir=/usr/local/etc/quagga --enable-vtysh \\
--localstatedir=/var/run/quagga
gmake
gmake install

On FreeBSD |BSDVERSION| you can use ``make`` or ``gmake``.
You probably want to compile Quagga from the ports system in
:file:`/usr/ports/net/quagga`.

VCORE
=====

@@ -12,8 +12,8 @@ networks. As an emulator, CORE builds a representation of a real computer
network that runs in real time, as opposed to simulation, where abstract models
are used. The live-running emulation can be connected to physical networks and
routers. It provides an environment for running real applications and
protocols, taking advantage of virtualization provided by the Linux or FreeBSD
operating systems.
protocols, taking advantage of virtualization provided by the Linux operating
system.

Some of its key features are:

@@ -94,8 +94,7 @@ further control.
How Does it Work?
=================

A CORE node is a lightweight virtual machine. The CORE framework runs on Linux
and FreeBSD systems. The primary platform used for development is Linux.
A CORE node is a lightweight virtual machine. The CORE framework runs on Linux.

.. index::
single: Linux; virtualization

@@ -104,8 +103,6 @@ and FreeBSD systems. The primary platform used for development is Linux.
single: network namespaces

* :ref:`Linux` CORE uses Linux network namespace virtualization to build virtual nodes, and ties them together with virtual networks using Linux Ethernet bridging.
* :ref:`FreeBSD` CORE uses jails with a network stack virtualization kernel option to build virtual nodes, and ties them together with virtual networks using BSD's Netgraph system.

.. _Linux:

@@ -117,9 +114,9 @@ technique used by CORE. LXC has been part of the mainline Linux kernel since
2.6.24. Recent Linux distributions such as Fedora and Ubuntu have
namespaces-enabled kernels out of the box, so the kernel does not need to be
patched or recompiled.
A namespace is created using the ``clone()`` system call. Similar
to the BSD jails, each namespace has its own process environment and private
network stack. Network namespaces share the same filesystem in CORE.
A namespace is created using the ``clone()`` system call. Each namespace has
its own process environment and private network stack. Network namespaces
share the same filesystem in CORE.
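CORE calls ``clone()`` itself, but the same kernel facility can be demonstrated from a shell with ``unshare`` (an illustration only, not how CORE creates its nodes):

::

  # a fresh network namespace starts with only an isolated loopback device
  sudo unshare --net ip link show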

.. index::
single: Linux; bridging

@@ -132,56 +129,6 @@ disciplines. Ebtables is Ethernet frame filtering on Linux bridges. Wireless
networks are emulated by controlling which interfaces can send and receive with
ebtables rules.
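While a session is running, the bridges and ebtables rules that implement this can be inspected from the host using the same commands shown elsewhere in this manual (bridge and chain names vary per session):

::

  brctl show
  ebtables -L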

.. _FreeBSD:

FreeBSD
-------

.. index::
single: FreeBSD; Network stack virtualization
single: FreeBSD; jails
single: FreeBSD; vimages

FreeBSD jails provide an isolated process space, a virtual environment for
running programs. Starting with FreeBSD 8.0, a new `vimage` kernel option
extends BSD jails so that each jail can have its own virtual network stack --
its own networking variables such as addresses, interfaces, routes, counters,
protocol state, socket information, etc. The existing networking algorithms and
code paths are intact but operate on this virtualized state.

Each jail plus network stack forms a lightweight virtual machine. These are
named jails or *virtual images* (or *vimages*) and are created using the
``jail`` or ``vimage`` command. Unlike traditional virtual
machines, vimages do not feature entire operating systems running on emulated
hardware. All of the vimages will share the same processor, memory, clock, and
other system resources. Because the actual hardware is not emulated and network
packets can be passed by reference through the in-kernel Netgraph system,
vimages are quite lightweight and a single system can accommodate numerous
instances.

Virtual network stacks in FreeBSD were historically available as a patch to the
FreeBSD 4.11 and 7.0 kernels, and the VirtNet project [#f1]_ [#f2]_
added this functionality to the
mainline 8.0-RELEASE and newer kernels.

.. index::
single: FreeBSD; Netgraph

The FreeBSD Operating System kernel features a graph-based
networking subsystem named Netgraph. The netgraph(4) manual page quoted below
best defines this system:

The netgraph system provides a uniform and modular system for the
implementation of kernel objects which perform various networking functions.
The objects, known as nodes, can be arranged into arbitrarily complicated
graphs. Nodes have hooks which are used to connect two nodes together,
forming the edges in the graph. Nodes communicate along the edges to
process data, implement protocols, etc.

The aim of netgraph is to supplement rather than replace the existing
kernel networking infrastructure.

.. index::
single: IMUNES
single: VirtNet

@@ -201,7 +148,7 @@ The Tcl/Tk CORE GUI was originally derived from the open source
project from the University of Zagreb
as a custom project within Boeing Research and Technology's Network
Technology research group in 2004. Since then they have developed the CORE
framework to use not only FreeBSD but Linux virtualization, have developed a
framework to use Linux virtualization, have developed a
Python framework, and made numerous user- and kernel-space developments, such
as support for wireless networks, IPsec, the ability to distribute emulations,
simulation integration, and more. The IMUNES project also consists of userspace

@@ -226,20 +173,16 @@ CORE has been released by Boeing to the open source community under the BSD
license. If you find CORE useful for your work, please contribute back to the
project. Contributions can be as simple as reporting a bug, dropping a line of
encouragement or technical suggestions to the mailing lists, or can also
include submitting patches or maintaining aspects of the tool. For details on
contributing to CORE, please visit the
`wiki <http://code.google.com/p/coreemu/wiki/Home>`_.
include submitting patches or maintaining aspects of the tool. For contributing to
CORE, please visit the
`CORE GitHub <https://github.com/coreemu/core>`_.

Besides this manual, there are other additional resources available online:

* `CORE website <http://www.nrl.navy.mil/itd/ncs/products/core>`_ - main project page containing demos, downloads, and mailing list information.
* `CORE supplemental website <http://code.google.com/p/coreemu/>`_ - supplemental Google Code page with a quickstart guide, wiki, bug tracker, and screenshots.

.. index::
single: wiki
single: CORE; wiki

The `CORE wiki <http://code.google.com/p/coreemu/wiki/Home>`_ is a good place to check for the latest documentation and tips.
single: CORE

Goals
-----

@@ -255,10 +198,9 @@ Non-Goals
---------
This is a list of Non-Goals, specific things that people may be interested in but are not areas that we will pursue.

#. Reinventing the wheel - Where possible, CORE reuses existing open source components such as virtualization, Netgraph, netem, bridging, Quagga, etc.
#. 1,000,000 nodes - While the goal of CORE is to provide efficient, scalable network emulation, there is no set goal of N number of nodes. There are realistic limits on what a machine can handle as its resources are divided amongst virtual nodes. We will continue to make things more efficient and let the user determine the right number of nodes based on available hardware and the activities each node is performing.
#. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in the Linux or FreeBSD operating systems.
#. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in the Linux operating system.
#. Hardware-specific - CORE itself is not an instantiation of hardware, a testbed, or a specific laboratory setup; it should run on commodity laptop and desktop PCs, in addition to high-end server hardware.

@@ -19,7 +19,7 @@ The top question about the performance of CORE is often
* Hardware - the number and speed of processors in the computer, the available
processor cache, RAM memory, and front-side bus speed may greatly affect
overall performance.
* Operating system version - Linux or FreeBSD, and the specific kernel versions
* Operating system version - the distribution of Linux and the specific kernel version
used will affect overall performance.
* Active processes - all nodes share the same CPU resources, so if one or more
nodes is performing a CPU-intensive task, overall performance will suffer.

@@ -28,8 +28,8 @@ The top question about the performance of CORE is often
* GUI usage - widgets that run periodically, mobility scenarios, and other GUI
interactions generally consume CPU cycles that may be needed for emulation.

On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running FreeBSD
|BSDVERSION|, we have found it reasonable to run 30-75 nodes running
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
we have found it reasonable to run 30-75 nodes running
OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more
nodes, but at that point it becomes critical as to what each of the nodes is
doing.

@@ -38,7 +38,7 @@ doing.

Because this software is primarily a network emulator, the more appropriate
question is *how much network traffic can it handle?* On the same 3.0GHz server
described above, running FreeBSD 4.11, about 300,000 packets-per-second can be
described above, running Linux, about 300,000 packets-per-second can be
pushed through the system. The number of hops and the size of the packets is
less important. The limiting factor is the number of times that the operating
system needs to handle a packet. The 300,000 pps figure represents the number

@@ -52,9 +52,9 @@ throughput seen on the full length of the network path.

For a more detailed study of performance in CORE, refer to the following publications:

* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.

* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.

* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time network emulator, Proceedings of IEEE MILCOM Conference, 2008.

173  doc/usage.rst
@@ -11,9 +11,9 @@ Using the CORE GUI

.. index:: how to use CORE

CORE can be used via the GUI or :ref:`Python_Scripting`.
A typical emulation workflow is outlined in :ref:`emulation-workflow`.
Often the GUI is used to draw nodes and network devices on the canvas.
A Python script could also be written, that imports the CORE Python module, to configure and instantiate nodes and networks. This chapter primarily covers usage of the CORE GUI.

.. _emulation-workflow:

@@ -24,7 +24,7 @@ A Python script could also be written, that imports the CORE Python module, to c

Emulation Workflow

CORE can be customized to perform any action at each phase depicted in :ref:`emulation-workflow`. See the *Hooks...* entry on the
:ref:`Session_Menu`
for details about when these session states are reached.

@@ -43,13 +43,13 @@ mode. Nodes are drawn on a blank canvas using the toolbar on the left and
configured from right-click menus or by double-clicking them. The GUI does not
need to be run as root.

Once editing is complete, pressing the green `Start` button (or choosing `Execute` from the `Session` menu) instantiates the topology within the FreeBSD kernel and enters Execute mode. In execute mode, the user can interact with the running emulated machines by double-clicking or right-clicking on them. The editing toolbar disappears and is replaced by an execute toolbar, which provides tools while running the emulation. Pressing the red `Stop` button (or choosing `Terminate` from the `Session` menu) will destroy the running emulation and return CORE to Edit mode.
Once editing is complete, pressing the green `Start` button (or choosing `Execute` from the `Session` menu) instantiates the topology within the Linux kernel and enters Execute mode. In execute mode, the user can interact with the running emulated machines by double-clicking or right-clicking on them. The editing toolbar disappears and is replaced by an execute toolbar, which provides tools while running the emulation. Pressing the red `Stop` button (or choosing `Terminate` from the `Session` menu) will destroy the running emulation and return CORE to Edit mode.

CORE can be started directly in Execute mode by specifying ``--start`` and a topology file on the command line:
::

core-gui --start ~/.core/configs/myfile.imn

Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be terminated. The emulation may be left running and the GUI can reconnect to an existing session at a later time.

@@ -62,8 +62,8 @@ There is also a **Batch** mode where CORE runs without the GUI and will instanti
::

core-gui --batch ~/.core/configs/myfile.imn

A session running in batch mode can be accessed using the ``vcmd`` command (or ``vimage`` on FreeBSD), or the GUI can connect to the session.
A session running in batch mode can be accessed using the ``vcmd`` command, or the GUI can connect to the session.

.. index:: closebatch

@@ -76,12 +76,12 @@ The session number is printed in the terminal when batch mode is started. This s
If you forget the session number, you can always start the CORE GUI and use the :ref:`Session_Menu` CORE sessions dialog box.
|
||||
.. NOTE::
|
||||
It is quite easy to have overlapping sessions when running in batch mode. This may become a problem when control networks are employed in these sessions as there could be addressing conflicts. See :ref:`Control_Network` for remedies.
|
||||
|
||||
It is quite easy to have overlapping sessions when running in batch mode. This may become a problem when control networks are employed in these sessions as there could be addressing conflicts. See :ref:`Control_Network` for remedies.
|
||||
|
||||
|
||||
.. NOTE::
|
||||
If you like to use batch mode, consider writing a
|
||||
CORE :ref:`Python script <Python_Scripting>` directly.
|
||||
CORE :ref:`Python script <Python_Scripting>` directly.
|
||||
This enables access to the full power of the Python API.
|
||||
The :ref:`File_Menu` has a basic `Export Python Script` option for getting
|
||||
started with a GUI-designed topology.
|
||||
|
@ -92,8 +92,7 @@ The session number is printed in the terminal when batch mode is started. This s
|
|||
|
||||
.. index:: root privileges
|
||||
|
||||
The GUI can be run as a normal user on Linux. For FreeBSD, the GUI should be run
|
||||
as root in order to start an emulation.
|
||||
The GUI can be run as a normal user on Linux.
|
||||
|
||||
.. index:: port number
|
||||
|
||||
|
@ -204,7 +203,7 @@ sub-menus, which appear when you click on their group icon.
|
|||
wireless nodes based on the distance between them
|
||||
|
||||
* |rj45| *RJ45* - with the RJ45 Physical Interface Tool, emulated nodes can
|
||||
be linked to real physical interfaces on the Linux or FreeBSD machine;
|
||||
be linked to real physical interfaces;
|
||||
using this tool, real networks and devices can be physically connected to
|
||||
the live-running emulation (:ref:`RJ45_Tool`)
|
||||
|
||||
|
@ -330,7 +329,7 @@ File Menu
|
|||
|
||||
The File menu contains options for manipulating the :file:`.imn`
|
||||
:ref:`Configuration_Files`. Generally, these menu items should not be used in
|
||||
Execute mode (:ref:`Modes_of_Operation`.)
|
||||
Execute mode (:ref:`Modes_of_Operation`.)
|
||||
|
||||
.. index:: New
|
||||
|
||||
|
@ -340,7 +339,7 @@ Execute mode (:ref:`Modes_of_Operation`.)
|
|||
|
||||
* *Open* - invokes the File Open dialog box for selecting a new :file:`.imn`
|
||||
or XML file to open. You can change the default path used for this dialog
|
||||
in the :ref:`Preferences` Dialog.
|
||||
in the :ref:`Preferences` Dialog.
|
||||
|
||||
.. index:: Save
|
||||
|
||||
|
@ -349,16 +348,16 @@ Execute mode (:ref:`Modes_of_Operation`.)
|
|||
|
||||
.. index:: Save As XML
|
||||
|
||||
* *Save As XML* - invokes the Save As dialog box for selecting a new
|
||||
* *Save As XML* - invokes the Save As dialog box for selecting a new
|
||||
:file:`.xml` file for saving the current configuration in the XML file.
|
||||
See :ref:`Configuration_Files`.
|
||||
See :ref:`Configuration_Files`.
|
||||
|
||||
.. index:: Save As imn
|
||||
|
||||
* *Save As imn* - invokes the Save As dialog box for selecting a new
|
||||
:file:`.imn`
|
||||
topology file for saving the current configuration. Files are saved in the
|
||||
*IMUNES network configuration* file format described in
|
||||
*IMUNES network configuration* file format described in
|
||||
:ref:`Configuration_Files`.
|
||||
|
||||
.. index:: Export Python script
|
||||
|
@ -376,7 +375,7 @@ Execute mode (:ref:`Modes_of_Operation`.)
|
|||
.. index:: Execute Python script with options
|
||||
|
||||
* *Execute Python script with options* - invokes a File Open dialog box for selecting a
|
||||
Python script to run and automatically connect to. After a selection is made,
|
||||
Python script to run and automatically connect to. After a selection is made,
|
||||
a Python Script Options dialog box is invoked to allow for command-line options to be added.
|
||||
The Python script must create a new CORE Session and add this session to the daemon's list of sessions
|
||||
in order for this to work; see :ref:`Python_Scripting`.
|
||||
|
@ -386,7 +385,7 @@ Execute mode (:ref:`Modes_of_Operation`.)
|
|||
* *Open current file in editor* - this opens the current topology file in the
|
||||
``vim`` text editor. First you need to save the file. Once the file has been
|
||||
edited with a text editor, you will need to reload the file to see your
|
||||
changes. The text editor can be changed from the :ref:`Preferences` Dialog.
|
||||
changes. The text editor can be changed from the :ref:`Preferences` Dialog.
|
||||
|
||||
.. index:: Print
|
||||
.. index:: printing
|
||||
|
@ -434,7 +433,7 @@ Edit Menu
|
|||
* *Cut*, *Copy*, *Paste* - used to cut, copy, and paste a selection. When nodes
|
||||
are pasted, their node numbers are automatically incremented, and existing
|
||||
links are preserved with new IP addresses assigned. Services and their
|
||||
customizations are copied to the new node, but care should be taken as
|
||||
customizations are copied to the new node, but care should be taken as
|
||||
node IP addresses have changed with possibly old addresses remaining in any
|
||||
custom service configurations. Annotations may also be copied and pasted.
|
||||
|
||||
|
@ -503,7 +502,7 @@ The canvas menu provides commands for adding, removing, changing, and switching
|
|||
altitude reference point used to convert between geographic and Cartesian
|
||||
coordinate systems. By clicking the *Save as default* option, all new
|
||||
canvases will be created with these properties. The default canvas size can
|
||||
also be changed in the :ref:`Preferences` dialog box.
|
||||
also be changed in the :ref:`Preferences` dialog box.
|
||||
|
||||
* *Wallpaper...* - used for setting the canvas background image,
|
||||
:ref:`Customizing_your_Topology's_Look`.
|
||||
|
@ -538,12 +537,12 @@ canvas.
|
|||
.. index:: hide nodes
|
||||
|
||||
* *Show hidden nodes* - reveal nodes that have been hidden. Nodes are hidden by
|
||||
selecting one or more nodes, right-clicking one and choosing *hide*.
|
||||
selecting one or more nodes, right-clicking one and choosing *hide*.
|
||||
|
||||
.. index:: locked view
|
||||
|
||||
* *Locked* - toggles locked view; when the view is locked, nodes cannot be
|
||||
moved around on the canvas with the mouse. This could be useful when
|
||||
moved around on the canvas with the mouse. This could be useful when
|
||||
sharing the topology with someone and you do not expect them to change
|
||||
things.
|
||||
|
||||
|
@ -585,7 +584,7 @@ The tools menu lists different utility functions.
|
|||
.. index:: autorearrange selected
|
||||
|
||||
* *Autorearrange selected* - automatically arranges the selected nodes on the
|
||||
canvas.
|
||||
canvas.
|
||||
|
||||
.. index:: align to grid
|
||||
|
||||
|
@ -710,7 +709,7 @@ Here are some standard widgets:
|
|||
routing protocols. A line is drawn from each router halfway to the router ID
|
||||
of an adjacent router. The color of the line is based on the OSPF adjacency
|
||||
state such as Two-way or Full. To learn about the different colors, see the
|
||||
*Configure Adjacency...* menu item. The :file:`vtysh` command is used to
|
||||
*Configure Adjacency...* menu item. The :file:`vtysh` command is used to
|
||||
dump OSPF neighbor information.
|
||||
Only half of the line is drawn because each
|
||||
router may be in a different adjacency state with respect to the other.
|
||||
|
@@ -724,11 +723,7 @@ Here are some standard widgets:

  link. If the throughput exceeds a certain threshold, the link will become
  highlighted. For wireless nodes which broadcast data to all nodes in range,
  the throughput rate is displayed next to the node and the node will become
-  circled if the threshold is exceeded. *Note: under FreeBSD, the
-  Throughput Widget will
-  display "0.0 kbps" on all links that have no configured link effects, because
-  of the way link statistics are counted; to fix this, add a small delay or a
-  bandwidth limit to each link.*
+  circled if the threshold is exceeded.

.. _Observer_Widgets:

@@ -810,7 +805,7 @@ and options.

of configured hooks, and buttons on the bottom left allow adding, editing,
and removing hook scripts. The new or edit button will open a hook script
editing window. A hook script is a shell script invoked on the host (not
within a virtual node).
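
As a minimal sketch, a hook script could simply record when its state was
reached; the log file name below is purely illustrative:

::

    #!/bin/sh
    # example hook: append a timestamp and hostname when this state is reached
    echo "hook ran at $(date) on $(hostname)" >> /tmp/core-hook-example.log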

The script is started at the session state specified in the drop down:

@@ -818,14 +813,14 @@ and options.

* *configuration* - when the user presses the *Start* button, node, link, and
  other configuration data is sent to the backend. This state is also
  reached when the user customizes a service.

* *instantiation* - after
  configuration data has been sent, just before the nodes are created.

* *runtime* - all nodes and networks have been
  built and are running. (This is the same state at which
  the previously-named *global experiment script* was run.)

* *datacollect* - the user has pressed the
  *Stop* button, but before services have been stopped and nodes have been

@@ -837,18 +832,18 @@ and options.

* *Reset node positions* - if you have moved nodes around
  using the mouse or by using a mobility module, choosing this item will reset
  all nodes to their original position on the canvas. The node locations are
  remembered when you first press the Start button.

* *Emulation servers...* - invokes the CORE emulation
  servers dialog for configuring :ref:`Distributed_Emulation`.

* *Change Sessions...* - invokes the Sessions dialog for switching between
  different running sessions. This dialog is presented during startup when one or
  more sessions are already running.

* *Options...* - presents per-session options, such as the IPv4 prefix to be
  used, if any, for a control network
  (see :ref:`Communicating_with_the_Host_Machine`); the ability to preserve
  the session directory; and an on/off switch for SDT3D support.

@@ -871,7 +866,7 @@ Connecting with Physical Networks

CORE's emulated networks run in real time, so they can be connected to live
physical networks. The RJ45 tool and the Tunnel tool help with connecting to
the real world. These tools are available from the *Link-layer nodes* menu.

When connecting two or more CORE emulations together, MAC address collisions
should be avoided. CORE automatically assigns MAC addresses to interfaces when
@@ -893,7 +888,7 @@ with the CORE nodes in real time.

The main drawback is that one physical interface is required for each
connection. When the physical interface is assigned to CORE, it may not be used
for anything else. Another consideration is that the computer or network that
you are connecting to must be co-located with the CORE machine.

To place an RJ45 connection, click on the *Link-layer nodes* toolbar and select
the *RJ45 Tool* from the submenu. Click on the canvas near the node you want to
@@ -904,8 +899,8 @@ physical interface. A list of available interfaces will be shown, and one may

be selected by double-clicking its name in the list, or an interface name may
be entered into the text box.

.. NOTE::
   When you press the Start button to instantiate your topology, the
   interface assigned to the RJ45 will be connected to the CORE topology. The
   interface can no longer be used by the system. For example, if there was an
   IP address assigned to the physical interface before execution, the address
@@ -925,7 +920,7 @@ physical ports are available, but the (e.g. switching) hardware connected to

the physical port must support the VLAN tagging, and the available bandwidth
will be shared.

-You need to create separate VLAN virtual devices on the Linux or FreeBSD host,
+You need to create separate VLAN virtual devices on the Linux host,
and then assign these devices to RJ45 nodes inside of CORE. The VLANning is
actually performed outside of CORE, so when the CORE emulated node receives a
packet, the VLAN tag will already be removed.
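
As a rough sketch, a VLAN device for VLAN ID 100 on a physical interface named
``eth0`` (both the interface name and the VLAN ID are illustrative) could be
created on the host with:

::

    # create and enable a VLAN sub-interface that can be assigned to an RJ45 node
    sudo ip link add link eth0 name eth0.100 type vlan id 100
    sudo ip link set eth0.100 up
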
@@ -953,15 +948,15 @@ Tunneling can be helpful when the number of physical interfaces is limited or

when the peer is located on a different network. Also a physical interface does
not need to be dedicated to CORE as with the RJ45 tool.

-The peer GRE tunnel endpoint may be another CORE machine or a (Linux, FreeBSD,
-etc.) host that supports GRE tunneling. When placing a Tunnel node, initially
+The peer GRE tunnel endpoint may be another CORE machine or another
+host that supports GRE tunneling. When placing a Tunnel node, initially
the node will display "UNASSIGNED". This text should be replaced with the IP
address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.

.. NOTE::
   Be aware of possible MTU issues with GRE devices. The *gretap* device
   has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
   bridge's MTU
   becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
   large packets if other bridge ports have a higher MTU such as 1,500 bytes.
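
If this becomes a problem, one possible workaround (a sketch, not an official
fix) is to lower the MTU of the other interfaces attached to the same bridge so
that no port exceeds the *gretap* MTU; the interface name below is illustrative:

::

    # match the MTU of another bridge port to the gretap device
    sudo ip link set dev eth0 mtu 1458
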
@@ -977,7 +972,7 @@ used.

.. index:: ip link command

Here are example commands for building the other end of a tunnel on a Linux
machine. In this example, a router in CORE has the virtual address
``10.0.0.1/24`` and the CORE host machine has the (real) address
``198.51.100.34/24``. The Linux box
that will connect with the CORE machine is reachable over the (real) network
@@ -989,7 +984,7 @@ an address from the subnet of the virtual router node,

``10.0.0.2/24``.

::

    # these commands are run on the tunnel peer
    sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
    sudo ip addr add 10.0.0.2/24 dev gt0
@@ -1053,7 +1048,7 @@ the node, and SSH with X11 forwarding can be used from the host to the node:

    ssh -X 172.16.0.5 xclock

Note that the :file:`coresendmsg` utility can be used for a node to send
messages to the CORE daemon running on the host (if the ``listenaddr = 0.0.0.0``
is set in the :file:`/etc/core/core.conf` file) to interact with the running
emulation. For example, a node may move itself or other nodes, or change
its icon based on some node state.
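
A sketch of the kind of command a node might issue is shown below; treat the
message and attribute names as assumptions and verify the exact syntax against
the :file:`coresendmsg` help text:

::

    # illustrative: ask the daemon to move node number 3 on the canvas
    coresendmsg node number=3 xpos=125 ypos=525
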
@@ -1108,7 +1103,7 @@ Wired Networks

Wired networks are created using the *Link Tool* to draw a link between two
nodes. This automatically draws a red line representing an Ethernet link and
creates new interfaces on network-layer nodes.

.. index:: link configuration

@@ -1124,12 +1119,11 @@ link, affecting its display.

.. index:: lanswitch

Link-layer nodes are provided for modeling wired networks. These do not create
-a separate network stack when instantiated, but are implemented using bridging
-(Linux) or Netgraph nodes (FreeBSD). These are the hub, switch, and wireless
-LAN nodes. The hub copies each packet from the incoming link to every connected
-link, while the switch behaves more like an Ethernet switch and keeps track of
-the Ethernet address of the connected peer, forwarding unicast traffic only to
-the appropriate ports.
+a separate network stack when instantiated, but are implemented using Linux bridging.
+These are the hub, switch, and wireless LAN nodes. The hub copies each packet from
+the incoming link to every connected link, while the switch behaves more like an
+Ethernet switch and keeps track of the Ethernet address of the connected peer,
+forwarding unicast traffic only to the appropriate ports.
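
Because these nodes are plain Linux bridges on the host, the bridges created for
a running session can be inspected from the host shell; either command below
should work depending on which tools are installed, and the exact bridge names
CORE chooses are an implementation detail:

::

    # list bridges created on the host for the emulation
    brctl show
    ip link show type bridge
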

The wireless LAN (WLAN) is covered in the next section.

@@ -1158,13 +1152,13 @@ on platform. See the table below for a brief overview of wireless model types.

============= ===================== ======== ==================================================================
Model Type    Supported Platform(s) Fidelity Description
============= ===================== ======== ==================================================================
-Basic on/off  Linux, FreeBSD        Low      Linux Ethernet bridging with ebtables (Linux) or ng_wlan (FreeBSD)
+Basic on/off  Linux                 Low      Linux Ethernet bridging with ebtables
EMANE Plug-in Linux                 High     TAP device connected to EMANE emulator with pluggable MAC and PHY radio types
============= ===================== ======== ==================================================================

To quickly build a wireless network, you can first place several router nodes
onto the canvas. If you have the
:ref:`Quagga MDR software <Quagga_Routing_Software>` installed, it is
recommended that you use the *mdr* node type for reduced routing overhead. Next
choose the *wireless LAN* from the *Link-layer nodes* submenu. First set the
@@ -1198,8 +1192,6 @@ dragging them, and wireless links will be dynamically made or broken.

The *EMANE* tab lists available EMANE models to use for wireless networking.
See the :ref:`EMANE` chapter for details on using EMANE.

-On FreeBSD, the WLAN node is realized using the *ng_wlan* Netgraph node.

.. _Mobility_Scripting:

Mobility Scripting
@@ -1213,7 +1205,7 @@ Mobility Scripting

.. index:: mobility scripting

CORE has a few ways to script mobility.

* ns-2 script - the script specifies either absolute positions
  or waypoints with a velocity. Locations are given with Cartesian coordinates.
@@ -1226,7 +1218,7 @@ CORE has a few ways to script mobility.

For the first method, you can create a mobility script using a text
editor, or using a tool such as `BonnMotion <http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/>`_, and associate the script with one of the wireless
LAN nodes using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
button, and set the *mobility script file* field in the resulting *ns2script*
configuration dialog.
@@ -1254,11 +1246,11 @@ The format of an ns-2 mobility script looks like:

    $node_(2) set Y_ 240.0
    $node_(2) set Z_ 0.00
    $ns_ at 1.00 "$node_(2) setdest 130.0 280.0 15.0"

The first three lines set an initial position for node 2. The last line in the
above example causes node 2 to move towards the destination `(130, 280)` at
speed `15`. All units are screen coordinates, with speed in units per second.
The total script time is learned after all nodes have reached their waypoints.
Initially, the time slider in the mobility script dialog will not be
@@ -1305,13 +1297,12 @@ Distributed Emulation

A large emulation scenario can be deployed on multiple emulation servers and
controlled by a single GUI. The GUI, representing the entire topology, can be
run on one of the emulation servers or on a separate machine. Emulations can be
-distributed on Linux, while tunneling support has not been added yet for
-FreeBSD.
+distributed on Linux.

Each machine that will act as an emulation server needs to have CORE installed.
It is not important to have the GUI component but the CORE Python daemon
:file:`core-daemon` needs to be installed. Set the ``listenaddr`` line in the
:file:`/etc/core/core.conf` configuration file so that the CORE Python
daemon will respond to commands from other servers:

::
@@ -1320,7 +1311,7 @@ daemon will respond to commands from other servers:

    pidfile = /var/run/core-daemon.pid
    logfile = /var/log/core-daemon.log
    listenaddr = 0.0.0.0

The ``listenaddr`` should be set to the address of the interface that should
receive CORE API control commands from the other servers; setting ``listenaddr
@@ -1356,19 +1347,19 @@ Servers are configured by choosing *Emulation servers...* from the *Session*

menu. Server parameters are configured in the list below and stored in a
*servers.conf* file for use in different scenarios. The IP address and port of
the server must be specified. The name of each server will be saved in the
topology file as each node's location.

.. NOTE::
   The server that the GUI connects with
   is referred to as the master server.

The user needs to assign nodes to emulation servers in the scenario. Making no
assignment means the node will be emulated on the master server.
In the configuration window of every node, a drop-down box located between
the *Node name* and the *Image* button will select the name of the emulation
server. By default, this menu shows *(none)*, indicating that the node will
be emulated locally on the master. When entering Execute mode, the CORE GUI
will deploy the node on its assigned emulation server.

Another way to assign emulation servers is to select one or more nodes using
@@ -1395,7 +1386,7 @@ If there is a link between two nodes residing on different servers, the GUI

will draw the link with a dashed line, and automatically create necessary
tunnels between the nodes when executed. Care should be taken to arrange the
topology such that the number of tunnels is minimized. The tunnels carry data
between servers to connect nodes as specified in the topology.
These tunnels are created using GRE tunneling, similar to the
:ref:`Tunnel_Tool`.

@@ -1561,7 +1552,7 @@ service. Generally they send a kill signal to the running process using the

*kill* or *killall* commands. If the service does not terminate
the running processes using a shutdown command, the processes will be killed
when the *vnoded* daemon is terminated (with *kill -9*) and
the namespace destroyed. It is a good practice to
specify shutdown commands, which will allow for proper process termination, and
for run-time control of stopping and restarting services.
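
As a sketch, the shutdown command for a service that runs the Quagga ``zebra``
daemon might be as simple as the following (the daemon name is an illustrative
assumption, not a requirement of any particular service):

::

    # stop the running daemon inside the node
    killall zebra
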
@@ -1606,7 +1597,7 @@ in the :file:`/etc/core/core.conf` configuration file. A sample is provided in

the :file:`myservices/` directory.
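
For example, the entry in :file:`/etc/core/core.conf` might look like the
following; the path shown is only an illustration of where such a directory
could live:

::

    custom_services_dir = /home/username/.core/myservices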

.. NOTE::
   The directory name used in `custom_services_dir` should be unique and
   should not correspond to
   any existing Python module name. For example, don't use the name `subprocess`
   or `services`.

@@ -1641,7 +1632,7 @@ create a bridge or namespace, or the failure to launch EMANE processes for an

EMANE-based network.

Clicking on an exception displays details for that
exception. If a node number is specified, that node is highlighted on the
canvas when the exception is selected. The exception source is a text string
to help trace where the exception occurred; "service:UserDefined" for example,
would appear for a failed validation command with the UserDefined service.
@@ -1654,7 +1645,7 @@ list and for viewing the CORE daemon and node log files.

.. index:: CEL batch mode

.. NOTE::
   In batch mode, exceptions received from the CORE daemon are displayed on
   the console.

.. _Configuration_Files:

@@ -1668,16 +1659,16 @@ Configuration Files

Configurations are saved to :file:`.xml` or :file:`.imn` topology files using
the *File* menu. You
can easily edit these files with a text editor.
Any time you edit the topology
file, you will need to stop the emulation if it is running and reload the
file.

The :file:`.xml` `file schema is specified by NRL <http://www.nrl.navy.mil/itd/ncs/products/mnmtools>`_ and there are two versions to date:
version 0.0 and version 1.0,
with 1.0 as the current default. CORE can open either XML version. However, the
xmlfilever line in :file:`/etc/core/core.conf` controls the version of the XML file
that CORE will create.
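
For example, to have CORE write version 1.0 XML files, the line in
:file:`/etc/core/core.conf` would look something like the following; this is
shown as an illustration, so check your installed :file:`core.conf` for the
exact default:

::

    xmlfilever = 1.0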

.. index:: Scenario Plan XML

@@ -1685,7 +1676,7 @@ In version 1.0, the XML file is also referred to as the Scenario Plan. The Scena

made up of the following:

* `Network Plan` - describes nodes, hosts, interfaces, and the networks to
  which they belong.
* `Motion Plan` - describes position and motion patterns for nodes in an
  emulation.
@@ -1694,7 +1685,7 @@ made up of the following:

* `Visualization Plan` - meta-data that is not part of the NRL XML schema but
  used only by CORE. For example, GUI options, canvas and annotation info, etc.
  are contained here.
* `Test Bed Mappings` - describes mappings of nodes, interfaces and EMANE modules in the scenario to
  test bed hardware.
  CORE includes Test Bed Mappings in XML files that are saved while the scenario is running.

@@ -1710,7 +1701,7 @@ indentation is one tab character.

.. tip::
   There are several topology examples included with CORE in
   the :file:`configs/` directory.
   This directory can be found in :file:`~/.core/configs`, or
   installed to the filesystem
   under :file:`/usr[/local]/share/examples/configs`.
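
One of these sample files can typically be opened when launching the GUI; a
sketch, assuming the bundled examples are installed and that :file:`core-gui`
accepts a topology file argument:

::

    core-gui ~/.core/configs/sample1.imn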