doc:updated CORE manual

This commit is contained in:
tgoff0 2015-05-22 00:56:29 +00:00
parent 5838814f07
commit e678c5231b
12 changed files with 201 additions and 146 deletions

View file

@ -13,9 +13,9 @@ SUBDIRS = man figures
# extra cruft to remove
DISTCLEANFILES = Makefile.in stamp-vti
rst_files = conf.py constants.txt credits.rst devguide.rst emane.rst \
index.rst install.rst intro.rst machine.rst ns3.rst \
performance.rst scripting.rst usage.rst
rst_files = conf.py constants.txt credits.rst ctrlnet.rst devguide.rst \
emane.rst index.rst install.rst intro.rst machine.rst \
ns3.rst performance.rst scripting.rst usage.rst
EXTRA_DIST = $(rst_files) _build _static _templates

View file

@ -17,10 +17,10 @@ contributors.
Jeff Ahrenholz <jeffrey.m.ahrenholz@boeing.com> has been the primary Boeing
developer of CORE, and has written this manual. Tom Goff
<thomas.goff@boeing.com> designed the Python framework and has made significant
contributions. Claudiu Danilov <claudiu.b.danilov@boeing.com>, Gary Pei
<guangyu.pei@boeing.com>, Phil Spagnolo, and Ian Chakeres have contributed code
to CORE. Dan Mackley <daniel.c.mackley@boeing.com> helped develop the CORE API,
originally to interface with a simulator. Jae Kim <jae.h.kim@boeing.com> and
Tom Henderson <thomas.r.henderson@boeing.com> have supervised the project and
provided direction.
contributions. Claudiu Danilov <claudiu.b.danilov@boeing.com>, Rod Santiago,
Kevin Larson, Gary Pei <guangyu.pei@boeing.com>, Phil Spagnolo, and Ian Chakeres
have contributed code to CORE. Dan Mackley <daniel.c.mackley@boeing.com> helped
develop the CORE API, originally to interface with a simulator.
Jae Kim <jae.h.kim@boeing.com> and Tom Henderson <thomas.r.henderson@boeing.com>
have supervised the project and provided direction.

View file

@ -40,12 +40,22 @@ emulates layers 1 and 2 (physical and data link) using its pluggable PHY and
MAC models.
The interface between CORE and EMANE is a TAP device. CORE builds the virtual
node using Linux network namespaces, and installs the TAP device into the
namespace. EMANE binds a userspace socket to the device, on the host before it
is pushed into the namespace, for sending and receiving data. The *Virtual
Transport* is the EMANE component responsible for connecting with the TAP
device.
node using Linux network namespaces, installs the TAP device into the
namespace, and instantiates one EMANE process in the namespace.
The EMANE process binds a userspace socket to the TAP device for
sending and receiving data from CORE.
.. NOTE::
When the installed EMANE version is older than 0.9.2, EMANE runs on the host
and binds a userspace socket to the TAP device, before it is pushed into the
namespace, for sending and receiving data. The *Virtual Transport* was
the EMANE component responsible for connecting with the TAP device.
An EMANE instance sends and receives OTA traffic to and from other
EMANE instances via a control network device (e.g. ``ctrl0``, ``ctrl1``).
It also sends and receives Events to and from the Event Service using
the same or a different control network device.
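These device choices surface in the platform XML that CORE generates for
each node. A minimal sketch, assuming the EMANE 0.9.x platform parameter
names ``otamanagerdevice`` and ``eventservicedevice``:
::

    <param name="otamanagerdevice" value="ctrl0"/>
    <param name="eventservicedevice" value="ctrl0"/>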
EMANE models are configured through CORE's WLAN configuration dialog. A
corresponding EmaneModel Python class is sub-classed for each supported EMANE
model, to provide configuration items and their mapping to XML files. This way
@ -135,9 +145,16 @@ Single PC with EMANE
This section describes running CORE and EMANE on a single machine. This is the
default mode of operation when building an EMANE network with CORE. The OTA
manager interface is off and the virtual nodes use the loopback device for
communicating with one another. This prevents your emulation session from
sending data on your local network and interfering with other EMANE users.
manager and Event service interfaces are set to use ``ctrl0`` and the virtual nodes
use the primary control channel for communicating with one another. The primary
control channel is automatically activated when a scenario involves EMANE.
Using the primary control channel prevents your emulation session from sending
multicast traffic on your local network and interfering with other EMANE users.
.. NOTE::
When the installed EMANE version is earlier than 0.9.2, the OTA manager and
Event service interfaces are set to use the loopback device.
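As a quick sanity check, you can open a shell on one of the virtual nodes
and look for the ``ctrl0`` interface. A sketch, assuming the default control
network prefix so that the host side of the bridge is ``172.16.0.254``:
::

    ip addr show dev ctrl0
    ping -c 1 172.16.0.254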
EMANE is configured through a WLAN node, because it is all about emulating
wireless radio networks. Once a node is linked to a WLAN cloud configured with
@ -197,23 +214,33 @@ be used to achieve geo-location accuracy in this situation.
Clicking the green *Start* button launches the emulation and causes TAP
devices to be created in the virtual nodes that are linked to the EMANE WLAN.
These devices appear with interface names such as eth0, eth1, etc. The EMANE
daemons should now be running on the host:
processes should now be running in each namespace. For a four-node scenario:
::
> ps -aef | grep emane
root 10472 1 1 12:57 ? 00:00:00 emane --logl 0 platform.xml
root 10526 1 1 12:57 ? 00:00:00 emanetransportd --logl 0 tr
The above example shows the *emane* and *emanetransportd* daemons started by
CORE. To view the configuration generated by CORE, look in the
> ps -aef | grep emane
root 1063 969 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane4.log /tmp/pycore.59992/platform4.xml
root 1117 959 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane2.log /tmp/pycore.59992/platform2.xml
root 1179 942 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane1.log /tmp/pycore.59992/platform1.xml
root 1239 979 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane5.log /tmp/pycore.59992/platform5.xml
The example above shows the EMANE processes started by CORE. To view the configuration generated by CORE, look in the
:file:`/tmp/pycore.nnnnn/` session directory for a :file:`platform.xml` file
and other XML files. One easy way to view this information is by
double-clicking one of the virtual nodes, and typing *cd ..* in the shell to go
up to the session directory.
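For example, the control devices chosen for EMANE can be confirmed from the
session directory. A sketch, assuming a session directory of
:file:`/tmp/pycore.59992` and the EMANE 0.9.x parameter names noted earlier:
::

    cd /tmp/pycore.59992
    grep -E 'otamanagerdevice|eventservicedevice' platform1.xml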
When EMANE is used to network together CORE nodes, no Ethernet bridging device
is used. The Virtual Transport creates a TAP device that is installed into the
network namespace container, so no corresponding device is visible on the host.
.. _single_pc_emane_figure:
.. figure:: figures/single-pc-emane.*
:alt: Single PC Emane
:align: center
:scale: 75%
Single PC with EMANE
.. index:: Distributed_EMANE
.. _Distributed_EMANE:
@ -226,15 +253,19 @@ Running CORE and EMANE distributed among two or more emulation servers is
similar to running on a single machine. There are a few key configuration items
that need to be set in order to be successful, and those are outlined here.
Because EMANE uses a multicast channel to disseminate data to all NEMs, it is
a good idea to maintain separate networks for data and control. The control
network may be a shared laboratory network, for example, but you do not want
multicast traffic on the data network to interfere with other EMANE users.
The examples described here will use *eth0* as a control interface
It is a good idea to maintain separate networks for data (OTA) and control. The control
network may be a shared laboratory network, for example, and you do not want
multicast traffic on the data network to interfere with other EMANE users. Furthermore,
control traffic could interfere with OTA latency and throughput and might affect
emulation fidelity. The examples described here will use *eth0* as a control interface
and *eth1* as a data interface, although using separate interfaces
is not strictly required. Note that these interface names refer to interfaces
present on the host machine, not virtual interfaces within a node.
.. IMPORTANT::
If an auxiliary control network is used, an interface on the host has to be assigned to that network.
See :ref:`Distributed_Control_Network`
Each machine that will act as an emulation server needs to have CORE and EMANE
installed. Refer to the :ref:`Distributed_Emulation` section for configuring
CORE.
@ -255,15 +286,35 @@ turn connects to the other emulation server "slaves". Public key SSH should
be configured from the master to the slaves as mentioned in the
:ref:`Distributed_Emulation` section.
The EMANE models can be configured as described in :ref:`Single_PC_with_EMANE`.
Under the *EMANE* tab of the EMANE WLAN, click on the *EMANE options* button.
This brings
up the emane configuration dialog. The *enable OTA Manager channel* should
be set to *on*. The *OTA Manager device* and *Event Service device* should
be set to something other than the loopback *lo* device. For example, if eth0
is your control device and eth1 is for data, set the OTA Manager device to eth1
and the Event Service device to eth0. Click *Apply* to
save these settings.
be set to a control network device. For example, if you have
a primary and an auxiliary control network (e.g. controlnet and controlnet1), and you want
the OTA traffic to have its own dedicated network, set the OTA Manager device to ``ctrl1``
and the Event Service device to ``ctrl0``.
The EMANE models can be configured as described in :ref:`Single_PC_with_EMANE`.
Click *Apply* to save these settings.
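Once the emulation is running, both control interfaces should be visible
from a node's shell. A sketch, assuming the auxiliary control network has
been activated:
::

    ip link show ctrl0
    ip link show ctrl1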
.. _distributed_emane_figure:
.. figure:: figures/distributed-emane-configuration.*
:alt: Distributed EMANE
:align: center
:scale: 75%
Distributed EMANE Configuration
.. NOTE::
When the installed EMANE version is earlier than 0.9.2, EMANE has access to the host
machine's interfaces, and the OTA Manager and Event Service devices can be set to physical interfaces.
.. HINT::
Here is a quick checklist for distributed emulation with EMANE.
@ -287,12 +338,9 @@ file and other EMANE XML files are generated. The NEM IDs are automatically
coordinated across servers so there is no overlap. Each server also gets its
own Platform ID.
Instead of using the loopback device for disseminating multicast
EMANE events, an Ethernet device is used as specified in the
*configure emane* dialog.
EMANE's Event Service can be run with mobility or pathloss scripts
as described in
:ref:`Single_PC_with_EMANE`. If CORE is not subscribed to location events, it
An Ethernet device is used for disseminating multicast EMANE events, as specified in the
*configure emane* dialog. EMANE's Event Service can be run with mobility or pathloss scripts
as described in :ref:`Single_PC_with_EMANE`. If CORE is not subscribed to location events, it
will generate them as nodes are moved on the canvas.
Double-clicking on a node during runtime will cause the GUI to attempt to SSH
@ -301,3 +349,11 @@ key SSH configuration should be tested with all emulation servers prior to
starting the emulation.
.. _distributed_emane_network_diagram:
.. figure:: figures/distributed-emane-network.*
:alt: Distributed EMANE
:align: center
:scale: 75%
Notional Distributed EMANE Network Diagram

View file

@ -36,7 +36,7 @@ figures_jpg = $(figures:%=%.jpg)
# icons from the GUI source
icons = select start router host pc mdr router_green \
lanswitch hub wlan \
lanswitch hub wlan cel \
link rj45 tunnel marker oval rectangle text \
stop observe plot twonode run document-properties
# list of icons + .gif.jpg suffix

BIN
doc/figures/controlnetwork.png (new executable file, 27 KiB)

(three more binary figure files added: 45 KiB, 160 KiB, and 43 KiB; their names are not shown in this view)

BIN
doc/figures/single-pc-emane.png (new executable file, 131 KiB)

View file

@ -1,5 +1,5 @@
.. This file is part of the CORE Manual
(c)2012 the Boeing Company
(c)2012,2015 the Boeing Company
.. only:: html or latex
@ -16,6 +16,7 @@ CORE Manual
usage
scripting
machine
ctrlnet
emane
ns3
performance

View file

@ -29,6 +29,19 @@ CORE is typically used for network and protocol research,
demonstrations, application and platform testing, evaluating networking
scenarios, security studies, and increasing the size of physical test networks.
What's New?
=================
For readers who are already familiar with CORE and have read this manual before, below is a list of what changed in version 4.8:
* :ref:`Configuration_Files` - a new XML format has been defined by the U.S. Naval Research Lab (NRL) for the Network Modeling Framework (NMF).
* :ref:`EMANE` - `Release 0.9.2 of EMANE <https://github.com/adjacentlink/emane/wiki/Release-Notes#092>`_ included a new capability that, in order to be leveraged, requires changes in how CORE deploys EMANE. The EMANE section of this document has been updated with the new method of connecting the deployed instances together.
* :ref:`Control_Network` - with EMANE 0.9.2, the CORE control network has become an important component of CORE. Auxiliary control networks have been added in addition to the primary control network to host EMANE traffic. As a result, the discussion of the control network has been elevated to a top-level topic.
* `Tips, Hints, Important Information` - miscellaneous information has been added to several chapters in the document.
.. index::
single: CORE; components of
single: CORE; API
@ -68,6 +81,7 @@ machines.
.. figure:: figures/core-architecture.*
:alt: CORE architecture diagram
:align: center
:scale: 75 %
CORE Architecture

View file

@ -1,5 +1,5 @@
.. This file is part of the CORE Manual
(c)2012 the Boeing Company
(c)2012-2015 the Boeing Company
.. _Using_the_CORE_GUI:
@ -53,10 +53,11 @@ CORE can be started directly in Execute mode by specifying ``--start`` and a top
Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be terminated. The emulation may be left running and the GUI can reconnect to an existing session at a later time.
.. index:: Batch mode
.. index:: batch mode
.. index:: batch
There is also a **Batch** mode where CORE runs without the GUI and will instantiate a topology from a given file. This is similar to the ``--start`` option, except that the GUI is not used:
::
@ -71,6 +72,12 @@ The session number is printed in the terminal when batch mode is started. This s
core-gui --closebatch 12345
.. TIP::
If you forget the session number, you can always start the CORE GUI and use the CORE Sessions dialog box from the :ref:`Session_Menu`.
.. NOTE::
It is quite easy to have overlapping sessions when running in batch mode. This may become a problem when control networks are employed in these sessions as there could be addressing conflicts. See :ref:`Control_Network` for remedies.
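For example, a complete batch-mode run might look like the following; the
file name and session number are illustrative:
::

    core-gui --batch myscenario.imn
    # note the session number printed at startup, e.g. 12345
    core-gui --closebatch 12345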
.. NOTE::
If you like to use batch mode, consider writing a
@ -92,8 +99,10 @@ as root in order to start an emulation.
The GUI can be connected to a different address or TCP port using
the ``--address`` and/or ``--port`` options. The defaults are shown below.
::
core-gui --address 127.0.0.1 --port 4038
core-gui --address 127.0.0.1 --port 4038
.. _Toolbar:
@ -330,7 +339,7 @@ Execute mode (:ref:`Modes_of_Operation`.)
.. index:: Open
* *Open* - invokes the File Open dialog box for selecting a new :file:`.imn`
topology file to open. You can change the default path used for this dialog
or XML file to open. You can change the default path used for this dialog
in the :ref:`Preferences` Dialog.
.. index:: Save
@ -341,16 +350,15 @@ Execute mode (:ref:`Modes_of_Operation`.)
.. index:: Save As XML
* *Save As XML* - invokes the Save As dialog box for selecting a new
:file:`.xml` scenario file for saving the current configuration.
This format includes a Network Plan, Motion Plan, Services Plan, and more
within a `Scenario` XML tag, described in :ref:`Configuration_Files`.
:file:`.xml` file for saving the current configuration in the XML file.
See :ref:`Configuration_Files`.
.. index:: Save As imn
* *Save As imn* - invokes the Save As dialog box for selecting a new
:file:`.imn`
topology file for saving the current configuration. Files are saved in the
*IMUNES network configuration* file format described in
*IMUNES network configuration* file format described in
:ref:`Configuration_Files`.
.. index:: Export Python script
@ -358,13 +366,21 @@ Execute mode (:ref:`Modes_of_Operation`.)
* *Export Python script* - prints Python snippets to the console, for inclusion
in a CORE Python script.
.. index:: Execute Python script
.. index:: Execute XML or Python script
* *Execute Python script* - invokes a File Open dialog box for selecting a
Python script to run and automatically connect to. The script must create
* *Execute XML or Python script* - invokes a File Open dialog box for selecting an XML file to run or a
Python script to run and automatically connect to. If a Python script, the script must create
a new CORE Session and add this session to the daemon's list of sessions
in order for this to work; see :ref:`Python_Scripting`.
.. index:: Execute Python script with options
* *Execute Python script with options* - invokes a File Open dialog box for selecting a
Python script to run and automatically connect to. After a selection is made,
a Python Script Options dialog box is invoked to allow for command-line options to be added.
The Python script must create a new CORE Session and add this session to the daemon's list of sessions
in order for this to work; see :ref:`Python_Scripting`.
.. index:: Open current file in editor
* *Open current file in editor* - this opens the current topology file in the
@ -759,6 +775,8 @@ and options.
.. index:: CORE Sessions Dialog
.. _CORE_Sessions_Dialog:
* *Change sessions...* - invokes the CORE Sessions dialog box containing a list
of active CORE sessions in the daemon. Basic session information such as
name, node count, start time, and a thumbnail are displayed. This dialog
@ -1004,6 +1022,12 @@ firewall is not blocking the GRE traffic.
Communicating with the Host Machine
-----------------------------------
The host machine that runs the CORE GUI and/or daemon is not necessarily
accessible from a node. Running an X11 application on a node, for example,
requires some channel of communication for the application to connect with
the X server for graphical display. There are several different ways to
connect from the node to the host and vice versa.
Control Network
^^^^^^^^^^^^^^^
@ -1012,42 +1036,14 @@ Control Network
.. index:: control network
.. index:: X11 applications
.. index:: node access to the host
.. index:: host access to a node
The host machine that runs the CORE GUI and/or daemon is not necessarily
accessible from a node. Running an X11 application on a node, for example,
requires some channel of communication for the application to connect with
the X server for graphical display. There are several different ways to
connect from the node to the host and vice versa.
Under the :ref:`Session_Menu`, the *Options...* dialog has an option to set
a *control network prefix*.
This can be set to a network prefix such as
``172.16.0.0/24``. A bridge will be created on the host machine having the last
address in the prefix range (e.g. ``172.16.0.254``), and each node will have
an extra ``ctrl0`` control interface configured with an address corresponding
to its node number (e.g. ``172.16.0.3`` for ``n3``.)
A default value for the control network may also
be specified by setting the ``controlnet`` line in the
:file:`/etc/core/core.conf` configuration file which new
sessions will use by default. For multiple sessions at once, the session
option should be used instead of the :file:`core.conf` default.
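For example, setting a default control network is a one-line change; a
sketch using the prefix mentioned above:
::

    # in /etc/core/core.conf
    controlnet = 172.16.0.0/24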
.. NOTE::
If you have a large scenario with more than 253 nodes, use a control
network prefix that allows more than the suggested ``/24``, such as ``/23``
or greater.
The quickest way to connect with the host machine is through the primary control network. See :ref:`Activating_the_Primary_Control_Network`.
.. index:: X11 forwarding
.. index:: SSH X11 forwarding
With a control network, the host can launch an X11 application on a node.
To run an X11 application on the node, the ``SSH`` service can be enabled on
the node, and SSH with X11 forwarding can be used from the host to the node:
@ -1057,52 +1053,12 @@ the node, and SSH with X11 forwarding can be used from the host to the node:
ssh -X 172.16.0.5 xclock
Note that the :file:`coresendmsg` utility can be used for a node to send
messages to the CORE daemon running on the host (if the ``listenaddr = 0.0.0.0`` is set in the :file:`/etc/core/core.conf` file) to interact with the running
messages to the CORE daemon running on the host (if the ``listenaddr = 0.0.0.0``
is set in the :file:`/etc/core/core.conf` file) to interact with the running
emulation. For example, a node may move itself or other nodes, or change
its icon based on some node state.
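A sketch of this usage from within a node; the message field names are
illustrative, so consult the :file:`coresendmsg` help output for the actual
names:
::

    # ask the CORE daemon to move node 3 on the canvas
    coresendmsg node number=3 xpos=125 ypos=525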
Control Networks with Distributed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index:: distributed control network
.. index:: control network distributed
When a control network is defined for a distributed session, a control network
bridge will be created on each of the slave servers, with GRE tunnels back
to the master server's bridge. The slave control bridges are not assigned an
address. From the host, any of the nodes (local or remote) can be accessed,
just like the single server case.
In some situations, remote emulated nodes need to communicate with the
host on which they are running and not the master server.
Multiple control network prefixes can be specified in the session option,
separated by spaces. In this case, control network addresses are allocated
from the first prefix on the master server. The remaining network prefixes
are used for subsequent servers sorted by alphabetic host name. For example,
if the control network option is set to
"``172.16.1.0/24 172.16.2.0/24 192.168.0.0/16``" and the servers *core1*,
*core2*, and *server1* are involved, the control network bridges will be
assigned as follows: *core1* = ``172.16.1.254`` (assuming it is the master
server), *core2* = ``172.16.2.254``, and *server1* = ``192.168.255.254``.
Tunnels back to the master server will still be built, but it is up to the
user to add appropriate routes if networking between control network
prefixes is desired. The control network script may help with this.
Control Network Script
^^^^^^^^^^^^^^^^^^^^^^
.. index:: control network scripts
.. index:: controlnet_updown_script
A control network script may be specified using the ``controlnet_updown_script``
option in the :file:`/etc/core/core.conf` file. This script will be run after
the bridge has been built (and address assigned) with the first argument
the name of the bridge, and the second argument the keyword "``startup``".
The script will again be invoked prior to bridge removal with the second
argument being the keyword "``shutdown``".
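A minimal sketch of such a script; the route commands are illustrative,
for example to reach a second control network prefix:
::

    #!/bin/sh
    # controlnet_updown_script: $1 is the bridge name,
    # $2 is the keyword "startup" or "shutdown"
    BRIDGE="$1"
    case "$2" in
    startup)
        ip route add 172.16.2.0/24 dev "$BRIDGE"
        ;;
    shutdown)
        ip route del 172.16.2.0/24 dev "$BRIDGE"
        ;;
    esac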
Other Methods
^^^^^^^^^^^^^
@ -1402,13 +1358,18 @@ menu. Server parameters are configured in the list below and stored in a
the server must be specified. The name of each server will be saved in the
topology file as each node's location.
.. NOTE::
The server that the GUI connects with
is referred to as the master server.
The user needs to assign nodes to emulation servers in the scenario. Making no
assignment means the node will be emulated locally, on the same machine that
the GUI is running. In the configuration window of every node, a drop-down box
located between the *Node name* and the *Image* button will select the name of
the emulation server. By default, this menu shows *(none)*, indicating that the
node will be emulated locally. When entering Execute mode, the CORE GUI will
deploy the node on its assigned emulation server.
assignment means the node will be emulated on the master server.
In the configuration window of every node, a drop-down box located between
the *Node name* and the *Image* button will select the name of the emulation
server. By default, this menu shows *(none)*, indicating that the node will
be emulated locally on the master. When entering Execute mode, the CORE GUI
will deploy the node on its assigned emulation server.
Another way to assign emulation servers is to select one or more nodes using
the select tool (shift-click to select multiple), and right-click one of the
@ -1421,6 +1382,10 @@ the *all nodes* button. Servers that have assigned nodes are shown in blue in
the server list. Another option is to first select a subset of nodes, then open
the *CORE emulation servers* box and use the *selected nodes* button.
.. IMPORTANT::
Leave the nodes unassigned if they are to be run on the master server.
Do not explicitly assign the nodes to the master server.
The emulation server machines should be reachable on the specified port and via
SSH. SSH is used when double-clicking a node to open a shell; the GUI will open
an SSH prompt to that node's emulation server. Public-key authentication should
@ -1658,7 +1623,9 @@ Check Emulation Light
.. index:: CEL
The Check Emulation Light, or CEL, is located in the bottom right-hand corner
.. |cel| image:: figures/cel.*
The |cel| Check Emulation Light, or CEL, is located in the bottom right-hand corner
of the status bar in the CORE GUI. This is a yellow icon that indicates one or
more problems with the running emulation. Clicking on the CEL will invoke the
CEL dialog.
@ -1682,6 +1649,14 @@ would appear for a failed validation command with the UserDefined service.
Buttons are available at the bottom of the dialog for clearing the exception
list and for viewing the CORE daemon and node log files.
.. index:: batch mode, CEL
.. index:: CEL batch mode
.. NOTE::
In batch mode, exceptions received from the CORE daemon are displayed on
the console.
.. _Configuration_Files:
Configuration Files
@ -1698,25 +1673,32 @@ Any time you edit the topology
file, you will need to stop the emulation if it were running and reload the
file.
The :file:`.xml` file schema
is `specified by NRL <http://www.nrl.navy.mil/itd/ncs/products/mnmtools>`_.
Planning documents are specified in NRL's Network Modeling Framework (NMF).
Here the individual planning documents are several tags
encased in one `<Scenario>` tag:
The :file:`.xml` `file schema is specified by NRL <http://www.nrl.navy.mil/itd/ncs/products/mnmtools>`_ and there are two versions to date:
version 0.0 and version 1.0,
with 1.0 as the current default. CORE can open either XML version. However, the
``xmlfilever`` line in :file:`/etc/core/core.conf` controls the version of the XML file
that CORE will create.
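For example, a sketch assuming the version string is written as it appears
above:
::

    # in /etc/core/core.conf
    xmlfilever = 1.0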
* `<NetworkPlan>` - describes nodes, hosts, interfaces, and the networks to
.. index:: Scenario Plan XML
In version 1.0, the XML file is also referred to as the Scenario Plan. The Scenario Plan
is logically made up of the following:
* `Network Plan` - describes nodes, hosts, interfaces, and the networks to
which they belong.
* `<MotionPlan>` - describes position and motion patterns for nodes in an
* `Motion Plan` - describes position and motion patterns for nodes in an
emulation.
* `<ServicePlan>` - describes services (protocols, applications) and traffic
* `Services Plan` - describes services (protocols, applications) and traffic
flows that are associated with certain nodes.
* `<CoreMetaData>` - meta-data that is not part of the NRL XML schema but
* `Visualization Plan` - meta-data that is not part of the NRL XML schema but
used only by CORE. For example, GUI options, canvas and annotation info, etc.
are contained here.
* `Test Bed Mappings` - describes mappings of nodes, interfaces and EMANE modules in the scenario to
test bed hardware.
CORE includes Test Bed Mappings in XML files that are saved while the scenario is running.
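For orientation, a bare skeleton of the version 0.0 layout described above;
element contents are omitted and the actual attributes come from the NRL
schema:
::

    <Scenario>
      <NetworkPlan> ... </NetworkPlan>
      <MotionPlan> ... </MotionPlan>
      <ServicePlan> ... </ServicePlan>
      <CoreMetaData> ... </CoreMetaData>
    </Scenario>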
.. index:: indentation
The :file:`.imn` file format comes from :ref:`IMUNES <Prior_Work>`, and is
basically Tcl lists of nodes, links, etc.
Tabs and spacing in the topology files are important. The file starts by
@ -1801,3 +1783,5 @@ The *Preferences* Dialog can be accessed from the :ref:`Edit_Menu`. There are
numerous defaults that can be set with this dialog, which are stored in the
:file:`~/.core/prefs.conf` preferences file.