+
diff --git a/docs/exampleservice.html b/docs/exampleservice.html
new file mode 100644
index 00000000..cddb18d4
--- /dev/null
+++ b/docs/exampleservice.html
@@ -0,0 +1,344 @@
+"""
+Sample user-defined service.
+"""
+from core.service import CoreService
+from core.service import ServiceMode
+
+
+class MyService(CoreService):
+    """
+    Custom CORE Service
+    """
+
+    # Name used as a unique ID for this service and is required, no spaces.
+    name = "MyService"
+    # Allows you to group services within the GUI under a common name.
+    group = "Utility"
+    # Executables this service depends on to function, if executable is not on the path, service will not be loaded.
+    executables = ()
+    # Services that this service depends on for startup, tuple of service names.
+    dependencies = ()
+    # Directories that this service will create within a node.
+    dirs = ()
+    # Files that this service will generate, without a full path this file goes in the node's directory.
+    # e.g. /tmp/pycore.12345/n1.conf/myfile
+    configs = ("myservice1.sh", "myservice2.sh")
+    # Commands used to start this service, any non-zero exit code will cause a failure.
+    startup = ("sh %s" % configs[0], "sh %s" % configs[1])
+    # Commands used to validate that a service was started, any non-zero exit code will cause a failure.
+    validate = ()
+    # Validation mode, used to determine startup success.
+    # - NON_BLOCKING - runs startup commands, and validates success with validation commands
+    # - BLOCKING - runs startup commands, and validates success with the startup commands themselves
+    # - TIMER - runs startup commands, and validates success by waiting for "validation_timer" alone
+    validation_mode = ServiceMode.NON_BLOCKING
+    # Time in seconds for a service to wait for validation, before determining success in TIMER/NON_BLOCKING modes.
+    validation_timer = 5
+    # Period in seconds to wait before retrying validation, only used in NON_BLOCKING mode.
+    validation_period = 0.5
+    # Shutdown commands to stop this service.
+    shutdown = ()
+
+    @classmethod
+    def on_load(cls):
+        """
+        Provides a way to run some arbitrary logic when the service is loaded, possibly to help facilitate
+        dynamic settings for the environment.
+        """
+        pass
+
+    @classmethod
+    def get_configs(cls, node):
+        """
+        Provides a way to dynamically generate the config files from the node a service will run.
+        Defaults to the class definition and can be left out entirely if not needed.
+        """
+        return cls.configs
+
+    @classmethod
+    def generate_config(cls, node, filename):
+        """
+        Returns a string representation for a file, given the node the service is starting on and the config
+        filename that this information will be used for. This must be defined, if "configs" are defined.
+        """
+        cfg = "#!/bin/sh\n"
+        if filename == cls.configs[0]:
+            cfg += "# auto-generated by MyService (sample.py)\n"
+            for ifc in node.netifs():
+                cfg += 'echo "Node %s has interface %s"\n' % (node.name, ifc.name)
+        elif filename == cls.configs[1]:
+            cfg += "echo hello"
+        return cfg
+
+    @classmethod
+    def get_startup(cls, node):
+        """
+        Provides a way to dynamically generate the startup commands from the node a service will run.
+        Defaults to the class definition and can be left out entirely if not needed.
+        """
+        return cls.startup
+
+    @classmethod
+    def get_validate(cls, node):
+        """
+        Provides a way to dynamically generate the validate commands from the node a service will run.
+        Defaults to the class definition and can be left out entirely if not needed.
+        """
+        return cls.validate
diff --git a/docs/grpc.md b/docs/grpc.md
deleted file mode 100644
index 3266a57d..00000000
--- a/docs/grpc.md
+++ /dev/null
@@ -1,411 +0,0 @@
-* Table of Contents
-
-## Overview
-
-[gRPC](https://grpc.io/) is a client/server API for interfacing with CORE
-and is used by the python GUI for driving all functionality. It requires
-a running `core-daemon` instance.
-
-A python client can be created from the raw generated gRPC files included
-with CORE, or you can leverage the provided gRPC client wrapper, which
-encapsulates some functionality to make things easier.
-
-## Python Client
-
-A python client wrapper is provided at
-[CoreGrpcClient](https://github.com/coreemu/core/blob/master/daemon/core/api/grpc/client.py)
-to help provide some conveniences when using the API.
-
-### Client HTTP Proxy
-
-Since gRPC is HTTP2 based, proxy configurations can cause issues. By default,
-the client disables proxy support to avoid issues when a proxy is present.
-You can enable proxy support when needed and account for it accordingly.
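-
-As a minimal sketch, proxy support is assumed to be opted into when
-creating the client wrapper described above:
-
-```python
-from core.api.grpc import client
-
-# assumption: the client wrapper accepts a proxy flag at construction time
-core = client.CoreGrpcClient(proxy=True)
-core.connect()
-```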
-
-## Proto Files
-
-Proto files define the API and the protobuf messages used to interface
-with it.
-
-They can be found
-[here](https://github.com/coreemu/core/tree/master/daemon/proto/core/api/grpc)
-to see the specifics of each call and the response message values that
-would be returned.
-
-## Examples
-
-### Node Models
-
-When creating nodes of type `NodeType.DEFAULT`, these are the default models
-and the services they map to; an example of selecting one follows the list.
-
-* mdr
- * zebra, OSPFv3MDR, IPForward
-* PC
- * DefaultRoute
-* router
- * zebra, OSPFv2, OSPFv3, IPForward
-* host
- * DefaultRoute, SSH
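-
-For example, a default node's model can be set when adding it to a session
-(a minimal sketch, mirroring the session setup used in the examples below):
-
-```python
-from core.api.grpc import client
-from core.api.grpc.wrappers import Position
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create a node mapped to the "router" model and its services
-position = Position(x=100, y=100)
-node1 = session.add_node(1, model="router", position=position)
-```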
-
-### Interface Helper
-
-There is an interface helper class that can be leveraged for convenience
-when creating interface data for nodes. Alternatively, one can manually create
-a `core.api.grpc.wrappers.Interface` instance with the appropriate information.
-
-Manually creating gRPC client interface:
-
-```python
-from core.api.grpc.wrappers import Interface
-
-# id is optional and will be set to the next available id
-# name is optional and will default to eth
-# mac is optional and will result in a randomly generated mac
-iface = Interface(
- id=0,
- name="eth0",
- ip4="10.0.0.1",
- ip4_mask=24,
- ip6="2001::",
- ip6_mask=64,
-)
-```
-
-Leveraging the interface helper class:
-
-```python
-from core.api.grpc import client
-
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-# node_id is used to get an ip4/ip6 address indexed from within the above prefixes
-# iface_id is required and used as the id for the created interface
-# name is optional and would default to eth
-# mac is optional and will result in a randomly generated mac
-iface_data = iface_helper.create_iface(
- node_id=1, iface_id=0, name="eth0", mac="00:00:00:00:aa:00"
-)
-```
-
-### Listening to Events
-
-Various events that can occur within a session can be listened to.
-
-Event types:
-
-* session - events for changes in session state and mobility start/stop/pause
-* node - events for node movements and icon changes
-* link - events for link configuration changes and wireless link add/delete
-* config - configuration events when legacy gui joins a session
-* exception - alert/error events
-* file - file events when the legacy gui joins a session
-
-```python
-from core.api.grpc import client
-from core.api.grpc.wrappers import EventType
-
-
-def event_listener(event):
- print(event)
-
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# provide no events to listen to all events
-core.events(session.id, event_listener)
-
-# provide events to listen to specific events
-core.events(session.id, event_listener, [EventType.NODE])
-```
-
-### Configuring Links
-
-Links can be configured at the time of creation or during runtime.
-
-Currently supported configuration options:
-
-* bandwidth (bps)
-* delay (us)
-* duplicate (%)
-* jitter (us)
-* loss (%)
-
-```python
-from core.api.grpc import client
-from core.api.grpc.wrappers import LinkOptions, Position
-
-# interface helper
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create nodes
-position = Position(x=100, y=100)
-node1 = session.add_node(1, position=position)
-position = Position(x=300, y=100)
-node2 = session.add_node(2, position=position)
-
-# configuring when creating a link
-options = LinkOptions(
- bandwidth=54_000_000,
- delay=5000,
- dup=5,
- loss=5.5,
- jitter=0,
-)
-iface1 = iface_helper.create_iface(node1.id, 0)
-iface2 = iface_helper.create_iface(node2.id, 0)
-link = session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)
-
-# configuring during runtime
-link.options.loss = 10.0
-core.edit_link(session.id, link)
-```
-
-### Peer to Peer Example
-
-```python
-# required imports
-from core.api.grpc import client
-from core.api.grpc.wrappers import Position
-
-# interface helper
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create nodes
-position = Position(x=100, y=100)
-node1 = session.add_node(1, position=position)
-position = Position(x=300, y=100)
-node2 = session.add_node(2, position=position)
-
-# create link
-iface1 = iface_helper.create_iface(node1.id, 0)
-iface2 = iface_helper.create_iface(node2.id, 0)
-session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)
-
-# start session
-core.start_session(session)
-```
-
-### Switch/Hub Example
-
-```python
-# required imports
-from core.api.grpc import client
-from core.api.grpc.wrappers import NodeType, Position
-
-# interface helper
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create nodes
-position = Position(x=200, y=200)
-switch = session.add_node(1, _type=NodeType.SWITCH, position=position)
-position = Position(x=100, y=100)
-node1 = session.add_node(2, position=position)
-position = Position(x=300, y=100)
-node2 = session.add_node(3, position=position)
-
-# create links
-iface1 = iface_helper.create_iface(node1.id, 0)
-session.add_link(node1=node1, node2=switch, iface1=iface1)
-iface1 = iface_helper.create_iface(node2.id, 0)
-session.add_link(node1=node2, node2=switch, iface1=iface1)
-
-# start session
-core.start_session(session)
-```
-
-### WLAN Example
-
-```python
-# required imports
-from core.api.grpc import client
-from core.api.grpc.wrappers import NodeType, Position
-
-# interface helper
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create nodes
-position = Position(x=200, y=200)
-wlan = session.add_node(1, _type=NodeType.WIRELESS_LAN, position=position)
-position = Position(x=100, y=100)
-node1 = session.add_node(2, model="mdr", position=position)
-position = Position(x=300, y=100)
-node2 = session.add_node(3, model="mdr", position=position)
-
-# create links
-iface1 = iface_helper.create_iface(node1.id, 0)
-session.add_link(node1=node1, node2=wlan, iface1=iface1)
-iface1 = iface_helper.create_iface(node2.id, 0)
-session.add_link(node1=node2, node2=wlan, iface1=iface1)
-
-# set wlan config using a dict of currently
-# supported values as strings
-wlan.set_wlan(
- {
- "range": "280",
- "bandwidth": "55000000",
- "delay": "6000",
- "jitter": "5",
- "error": "5",
- }
-)
-
-# start session
-core.start_session(session)
-```
-
-### EMANE Example
-
-For EMANE you can import and use one of the existing models and
-use its name for configuration.
-
-Current models:
-
-* core.emane.ieee80211abg.EmaneIeee80211abgModel
-* core.emane.rfpipe.EmaneRfPipeModel
-* core.emane.tdma.EmaneTdmaModel
-* core.emane.bypass.EmaneBypassModel
-
-Their configuration options are driven dynamically from the EMANE manifest
-files parsed from the installed version of EMANE.
-
-Options and their purpose can be found at the [EMANE Wiki](https://github.com/adjacentlink/emane/wiki).
-
-When configuring EMANE global settings or model mac/phy specific settings,
-any value not provided will use its default.
-
-```python
-# required imports
-from core.api.grpc import client
-from core.api.grpc.wrappers import NodeType, Position
-from core.emane.models.ieee80211abg import EmaneIeee80211abgModel
-
-# interface helper
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-
-# create grpc client and connect
-core = client.CoreGrpcClient()
-core.connect()
-
-# add session
-session = core.create_session()
-
-# create nodes
-position = Position(x=200, y=200)
-emane = session.add_node(
- 1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name
-)
-position = Position(x=100, y=100)
-node1 = session.add_node(2, model="mdr", position=position)
-position = Position(x=300, y=100)
-node2 = session.add_node(3, model="mdr", position=position)
-
-# create links
-iface1 = iface_helper.create_iface(node1.id, 0)
-session.add_link(node1=node1, node2=emane, iface1=iface1)
-iface1 = iface_helper.create_iface(node2.id, 0)
-session.add_link(node1=node2, node2=emane, iface1=iface1)
-
-# setting emane specific emane model configuration
-emane.set_emane_model(EmaneIeee80211abgModel.name, {
- "eventservicettl": "2",
- "unicastrate": "3",
-})
-
-# start session
-core.start_session(session)
-```
-
-EMANE Model Configuration:
-
-```python
-# emane network specific config, set on an emane node
-# this setting applies to all nodes connected
-emane.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"})
-
-# node specific config for an individual node connected to an emane network
-node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"})
-
-# node interface specific config for an individual node connected to an emane network
-node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"}, iface_id=0)
-```
-
-## Configuring a Service
-
-Services help generate and run bash scripts on nodes for a given purpose.
-
-Configuring the files of a service results in a specific hard-coded script
-being generated, instead of the default scripts, which may leverage dynamic generation.
-
-The following features can be configured for a service:
-
-* files - files that will be generated
-* directories - directories that will be mounted unique to the node
-* startup - commands to run to start a service
-* validate - commands to run to validate a service
-* shutdown - commands to run to stop a service
-
-Editing service properties:
-
-```python
-from core.api.grpc.wrappers import NodeServiceData
-
-# configure a service, for a node, for a given session
-node.service_configs[service_name] = NodeServiceData(
- configs=["file1.sh", "file2.sh"],
- directories=["/etc/node"],
- startup=["bash file1.sh"],
- validate=[],
- shutdown=[],
-)
-```
-
-When editing a service file, it must be the name of a `config`
-file that the service will generate.
-
-Editing a service file:
-
-```python
-# to edit the contents of a generated file you can specify
-# the service, the file name, and its contents
-file_configs = node.service_file_configs.setdefault(service_name, {})
-file_configs[file_name] = "echo hello world"
-```
-
-## File Examples
-
-File versions of the network examples can be found
-[here](https://github.com/coreemu/core/tree/master/package/examples/grpc).
-These examples will create a session using the gRPC API when the core-daemon is running.
-
-You can then switch to and attach to these sessions using either of the CORE GUIs.
diff --git a/docs/gui.md b/docs/gui.md
deleted file mode 100644
index c296ac18..00000000
--- a/docs/gui.md
+++ /dev/null
@@ -1,497 +0,0 @@
-# CORE GUI
-
-
-
-## Overview
-
-The GUI is used to draw nodes and network devices on a canvas, linking them
-together to create an emulated network session.
-
-After pressing the start button, CORE will proceed through these phases,
-staying in the **runtime** phase. After the session is stopped, CORE will
-proceed to the **data collection** phase before tearing down the emulated
-state.
-
-CORE can be customized to perform any action at each state. See the
-**Hooks...** entry on the [Session Menu](#session-menu) for details about
-when these session states are reached.
-
-## Prerequisites
-
-Beyond installing CORE, you must have the CORE daemon running. It can be
-started via systemd or invoked directly on the command line.
-
-```shell
-# systemd service
-sudo systemctl daemon-reload
-sudo systemctl start core-daemon
-
-# direct invocation
-sudo core-daemon
-```
-
-## GUI Files
-
-The GUI will create a directory in your home directory on first run called
-**~/.coregui**. This directory holds the various files that the GUI may use.
-
-* .coregui/
- * backgrounds/
- * place backgrounds used for display in the GUI
- * custom_emane/
- * place to keep custom emane models to use with the core-daemon
- * custom_services/
- * place to keep custom services to use with the core-daemon
- * icons/
- * icons the GUI uses along with customs icons desired
- * mobility/
- * place to keep custom mobility files
- * scripts/
- * place to keep core related scripts
- * xmls/
- * place to keep saved session xml files
- * gui.log
- * log file written when running the gui, look here for exceptions when issues occur
- * config.yaml
- * configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc)
-
-## Modes of Operation
-
-The CORE GUI has two primary modes of operation, **Edit** and **Execute**
-modes. Running the GUI, by typing **core-gui** with no options, starts in
-Edit mode. Nodes are drawn on a blank canvas using the toolbar on the left
-and configured from right-click menus or by double-clicking them. The GUI
-does not need to be run as root.
-
-Once editing is complete, pressing the green **Start** button instantiates
-the topology and enters Execute mode. In execute mode,
-the user can interact with the running emulated machines by double-clicking or
-right-clicking on them. The editing toolbar disappears and is replaced by an
-execute toolbar, which provides tools while running the emulation. Pressing
-the red **Stop** button will destroy the running emulation and return CORE
-to Edit mode.
-
-Once the emulation is running, the GUI can be closed, and a prompt will appear
-asking if the emulation should be terminated. The emulation may be left
-running and the GUI can reconnect to an existing session at a later time.
-
-The GUI can be run as a normal user on Linux.
-
-The GUI currently provides the following options on startup.
-
-```shell
-usage: core-gui [-h] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-p]
- [-s SESSION] [--create-dir]
-
-CORE Python GUI
-
-optional arguments:
- -h, --help show this help message and exit
- -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
- logging level
- -p, --proxy enable proxy
- -s SESSION, --session SESSION
- session id to join
- --create-dir create gui directory and exit
-```
-
-## Toolbar
-
-The toolbar is a row of buttons that runs vertically along the left side of the
-CORE GUI window. The toolbar changes depending on the mode of operation.
-
-### Editing Toolbar
-
-When CORE is in Edit mode (the default), the vertical Editing Toolbar exists on
-the left side of the CORE window. Below are brief descriptions for each toolbar
-item, starting from the top. Most of the tools are grouped into related
-sub-menus, which appear when you click on their group icon.
-
-| Icon | Name | Description |
-|----------------------------|----------------|----------------------------------------------------------------------------------------|
-|  | Selection Tool | Tool for selecting, moving, configuring nodes. |
-|  | Start Button | Starts Execute mode, instantiates the emulation. |
-|  | Link | Allows network links to be drawn between two nodes by clicking and dragging the mouse. |
-
-### CORE Nodes
-
-These nodes will create a new node container and run associated services.
-
-| Icon | Name | Description |
-|----------------------------|---------|------------------------------------------------------------------------------|
-|  | Router | Runs Quagga OSPFv2 and OSPFv3 routing to forward packets. |
-|  | Host | Emulated server machine having a default route, runs SSH server. |
-|  | PC | Basic emulated machine having a default route, runs no processes by default. |
-|  | MDR | Runs Quagga OSPFv3 MDR routing for MANET-optimized routing. |
-|  | PRouter | Physical router represents a real testbed machine. |
-
-### Network Nodes
-
-These nodes are mostly used to create a Linux bridge that serves the
-purpose described below.
-
-| Icon | Name | Description |
-|-------------------------------|--------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-|  | Hub | Ethernet hub forwards incoming packets to every connected node. |
-|  | Switch | Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table. |
-|  | Wireless LAN | When routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them. |
-|  | RJ45 | RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation. |
-|  | Tunnel | Tool allows connecting together more than one CORE emulation using GRE tunnels. |
-
-### Annotation Tools
-
-| Icon | Name | Description |
-|-------------------------------|-----------|---------------------------------------------------------------------|
-|  | Marker | For drawing marks on the canvas. |
-|  | Oval | For drawing circles on the canvas that appear in the background. |
-|  | Rectangle | For drawing rectangles on the canvas that appear in the background. |
-|  | Text | For placing text captions on the canvas. |
-
-### Execution Toolbar
-
-When the Start button is pressed, CORE switches to Execute mode, and the Edit
-toolbar on the left of the CORE window is replaced with the Execution toolbar.
-Below are the items on this toolbar, starting from the top.
-
-| Icon | Name | Description |
-|----------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-|  | Stop Button | Stops Execute mode, terminates the emulation, returns CORE to edit mode. |
-|  | Selection Tool | In Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node. |
-|  | Marker | For drawing freehand lines on the canvas, useful during demonstrations; markings are not saved. |
-|  | Run Tool | This tool allows easily running a command on all or a subset of all nodes. A list box allows selecting any of the nodes. A text entry box allows entering any command. The command should return immediately; otherwise the display will block while awaiting a response. The *ping* command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text "NODE" will be replaced with the node name. The command will not be run on nodes that are not routers, PCs, or hosts, even if they are selected. |
-
-## Menu
-
-The menubar runs along the top of the CORE GUI window and provides access to a
-variety of features. Some of the menus are detachable, such as the *Widgets*
-menu, by clicking the dashed line at the top.
-
-### File Menu
-
-The File menu contains options for saving and opening saved sessions.
-
-| Option | Description |
-|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| New Session | This starts a new session with an empty canvas. |
-| Save | Saves the current topology. If you have not yet specified a file name, the Save As dialog box is invoked. |
-| Save As | Invokes the Save As dialog box for selecting a new **.xml** file for saving the current configuration in the XML file. |
-| Open | Invokes the File Open dialog box for selecting a new XML file to open. |
-| Recently used files   | Above the Quit menu command is a list of recently used files, if any have been opened. You can clear this list in the Preferences dialog box. You can specify the number of files to keep in this list from the Preferences dialog. Click on one of the file names listed to open that configuration file. |
-| Execute Python Script | Invokes a File Open dialog box for selecting a Python script to run and automatically connect to. After a selection is made, a Python Script Options dialog box is invoked to allow for command-line options to be added. The Python script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work. |
-| Quit | The Quit command should be used to exit the CORE GUI. CORE may prompt for termination if you are currently in Execute mode. Preferences and the recently-used files list are saved. |
-
-### Edit Menu
-
-| Option | Description |
-|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Preferences | Invokes the Preferences dialog box. |
-| Custom Nodes | Custom node creation dialog box. |
-| Undo | (Disabled) Attempts to undo the last edit in edit mode. |
-| Redo | (Disabled) Attempts to redo an edit that has been undone. |
-| Cut, Copy, Paste, Delete | Used to cut, copy, paste, and delete a selection. When nodes are pasted, their node numbers are automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their customizations are copied to the new node, but care should be taken as node IP addresses will have changed, and old addresses may remain in any custom service configurations. Annotations may also be copied and pasted. |
-
-### Canvas Menu
-
-The canvas menu provides commands related to the editing canvas.
-
-| Option | Description |
-|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Size/scale | Invokes a Canvas Size and Scale dialog that allows configuring the canvas size, scale, and geographic reference point. The size controls allow changing the width and height of the current canvas, in pixels or meters. The scale allows specifying how many meters are equivalent to 100 pixels. The reference point controls specify the latitude, longitude, and altitude reference point used to convert between geographic and Cartesian coordinate systems. By clicking the *Save as default* option, all new canvases will be created with these properties. The default canvas size can also be changed in the Preferences dialog box. |
-| Wallpaper | Used for setting the canvas background image. |
-
-### View Menu
-
-The View menu features items for toggling on and off their display on the canvas.
-
-| Option | Description |
-|-----------------|-----------------------------------|
-| Interface Names | Display interface names on links. |
-| IPv4 Addresses | Display IPv4 addresses on links. |
-| IPv6 Addresses | Display IPv6 addresses on links. |
-| Node Labels | Display node names. |
-| Link Labels | Display link labels. |
-| Annotations | Display annotations. |
-| Canvas Grid | Display the canvas grid. |
-
-### Tools Menu
-
-The tools menu lists different utility functions.
-
-| Option | Description |
-|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Find | Display find dialog used for highlighting a node on the canvas. |
-| Auto Grid | Automatically layout nodes in a grid. |
-| IP addresses | Invokes the IP Addresses dialog box for configuring which IPv4/IPv6 prefixes are used when automatically addressing new interfaces. |
-| MAC addresses | Invokes the MAC Addresses dialog box for configuring the starting number used as the lowest byte when generating each interface MAC address. This value should be changed when tunneling between CORE emulations to prevent MAC address conflicts. |
-
-### Widgets Menu
-
-Widgets are GUI elements that allow interaction with a running emulation.
-Widgets typically automate the running of commands on emulated nodes to report
-status information of some type and display this on screen.
-
-#### Periodic Widgets
-
-These Widgets are those available from the main *Widgets* menu. More than one
-of these Widgets may be run concurrently. An event loop fires once every second
-that the emulation is running. If one of these Widgets is enabled, its periodic
-routine will be invoked at this time. Each Widget may have a configuration
-dialog box which is also accessible from the *Widgets* menu.
-
-Here are some standard widgets:
-
-* **Adjacency** - displays router adjacency states for Quagga's OSPFv2 and OSPFv3
- routing protocols. A line is drawn from each router halfway to the router ID
- of an adjacent router. The color of the line is based on the OSPF adjacency
- state such as Two-way or Full. To learn about the different colors, see the
- *Configure Adjacency...* menu item. The **vtysh** command is used to
- dump OSPF neighbor information.
- Only half of the line is drawn because each
- router may be in a different adjacency state with respect to the other.
-* **Throughput** - displays the kilobits-per-second throughput above each link,
- using statistics gathered from each link. If the throughput exceeds a certain
- threshold, the link will become highlighted. For wireless nodes which broadcast
- data to all nodes in range, the throughput rate is displayed next to the node and
- the node will become circled if the threshold is exceeded.
-
-#### Observer Widgets
-
-These Widgets are available from the **Observer Widgets** submenu of the
-**Widgets** menu, and from the Widgets Tool on the toolbar. Only one Observer Widget may
-be used at a time. Mouse over a node while the session is running to pop up
-an informational display about that node.
-
-Available Observer Widgets include IPv4 and IPv6 routing tables, socket
-information, list of running processes, and OSPFv2/v3 neighbor information.
-
-Observer Widgets may be edited by the user and rearranged. Choosing
-**Widgets->Observer Widgets->Edit Observers** from the Observer Widget menu will
-invoke the Observer Widgets dialog. A list of Observer Widgets is displayed along
-with up and down arrows for rearranging the list. Controls are available for
-renaming each widget, for changing the command that is run during mouse over, and
-for adding and deleting items from the list. Note that specified commands should
-return immediately to avoid delays in the GUI display. Changes are saved to a
-**config.yaml** file in the CORE configuration directory.
-
-### Session Menu
-
-The Session Menu has entries for starting, stopping, and managing sessions,
-in addition to global options such as node types, comments, hooks, servers,
-and options.
-
-| Option | Description |
-|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Sessions | Invokes the CORE Sessions dialog box containing a list of active CORE sessions in the daemon. Basic session information such as name, node count, start time, and a thumbnail are displayed. This dialog allows connecting to different sessions, shutting them down, or starting a new session. |
-| Servers | Invokes the CORE emulation servers dialog for configuring. |
-| Options  | Presents per-session options, such as the IPv4 prefix to be used, if any, for a control network; the ability to preserve the session directory; and an on/off switch for SDT3D support. |
-| Hooks | Invokes the CORE Session Hooks window where scripts may be configured for a particular session state. The session states are defined in the [table](#session-states) below. The top of the window has a list of configured hooks, and buttons on the bottom left allow adding, editing, and removing hook scripts. The new or edit button will open a hook script editing window. A hook script is a shell script invoked on the host (not within a virtual node). |
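-
-As an illustrative sketch, a hook script configured for the **datacollect**
-state might preserve files generated by nodes before they are torn down
-(the `/tmp/pycore.*` session directory paths are hypothetical examples):
-
-```shell
-#!/bin/sh
-# hypothetical datacollect hook: copy node directories off before shutdown
-mkdir -p /tmp/core-results
-cp -r /tmp/pycore.*/n1.conf /tmp/core-results/
-```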
-
-#### Session States
-
-| State | Description |
-|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Definition | Used by the GUI to tell the backend to clear any state. |
-| Configuration | When the user presses the *Start* button, node, link, and other configuration data is sent to the backend. This state is also reached when the user customizes a service. |
-| Instantiation | After configuration data has been sent, just before the nodes are created. |
-| Runtime | All nodes and networks have been built and are running. (This is the same state at which the previously-named *global experiment script* was run.) |
-| Datacollect | The user has pressed the *Stop* button, but before services have been stopped and nodes have been shut down. This is a good time to collect log files and other data from the nodes. |
-| Shutdown | All nodes and networks have been shut down and destroyed. |
-
-### Help Menu
-
-| Option | Description |
-|--------------------------|---------------------------------------------------------------|
-| CORE Github (www) | Link to the CORE GitHub page. |
-| CORE Documentation (www) | Link to the CORE Documentation page.                           |
-| About | Invokes the About dialog box for viewing version information. |
-
-## Building Sample Networks
-
-### Wired Networks
-
-Wired networks are created using the **Link Tool** to draw a link between two
-nodes. This automatically draws a red line representing an Ethernet link and
-creates new interfaces on network-layer nodes.
-
-Double-click on the link to invoke the **link configuration** dialog box. Here
-you can change the Bandwidth, Delay, Loss, and Duplicate
-rate parameters for that link. You can also modify the color and width of the
-link, affecting its display.
-
-Link-layer nodes are provided for modeling wired networks. These do not create
-a separate network stack when instantiated, but are implemented using Linux bridging.
-These are the hub, switch, and wireless LAN nodes. The hub copies each packet from
-the incoming link to every connected link, while the switch behaves more like an
-Ethernet switch and keeps track of the Ethernet address of the connected peer,
-forwarding unicast traffic only to the appropriate ports.
-
-The wireless LAN (WLAN) is covered in the next section.
-
-### Wireless Networks
-
-Wireless networks allow moving nodes around to impact the connectivity between them. The connection between a
-pair of nodes is stronger when the nodes are closer together and weaker when they are farther apart.
-CORE offers several levels of wireless emulation fidelity, depending on modeling needs and available
-hardware.
-
-* WLAN Node
- * uses set bandwidth, delay, and loss
- * links are enabled or disabled based on a set range
- * uses the least CPU when moving, but nothing extra when not moving
-* Wireless Node
- * uses set bandwidth, delay, and initial loss
- * loss dynamically changes based on distance between nodes, which can be configured with range parameters
- * links are enabled or disabled based on a set range
- * uses more CPU to calculate loss for every movement, but nothing extra when not moving
-* EMANE Node
- * uses a physical layer model to account for signal propagation, antenna profile effects and interference
- sources in order to provide a realistic environment for wireless experimentation
- * uses the most CPU for every packet, as complex calculations are used for fidelity
- * See [Wiki](https://github.com/adjacentlink/emane/wiki) for details on general EMANE usage
- * See [CORE EMANE](emane.md) for details on using EMANE in CORE
-
-| Model | Type | Supported Platform(s) | Fidelity | Description |
-|----------|--------|-----------------------|----------|-------------------------------------------------------------------------------|
-| WLAN | On/Off | Linux | Low | Ethernet bridging with nftables |
-| Wireless | On/Off | Linux | Medium | Ethernet bridging with nftables |
-| EMANE | RF | Linux | High | TAP device connected to EMANE emulator with pluggable MAC and PHY radio types |
-
-#### Example WLAN Network Setup
-
-To quickly build a wireless network, you can first place several router nodes
-onto the canvas. If you have the
-Quagga MDR software installed, it is
-recommended that you use the **mdr** node type for reduced routing overhead. Next
-choose the **WLAN** from the **Link-layer nodes** submenu. First set the
-desired WLAN parameters by double-clicking the cloud icon. Then you can link
-all selected nodes by right-clicking on the WLAN and choosing **Link to Selected**.
-
-Linking a router to the WLAN causes a small antenna to appear, but no red link
-line is drawn. Routers can have multiple wireless links and both wireless and
-wired links (however, you will need to manually configure route
-redistribution.) The mdr node type will generate a routing configuration that
-enables OSPFv3 with MANET extensions. This is a Boeing-developed extension to
-Quagga's OSPFv3 that reduces flooding overhead and optimizes the flooding
-procedure for mobile ad-hoc (MANET) networks.
-
-The default configuration of the WLAN is set to use the basic range model. Having this model
-selected causes **core-daemon** to calculate the distance between nodes based
-on screen pixels. A numeric range in screen pixels is set for the wireless
-network using the **Range** slider. When two wireless nodes are within range of
-each other, a green line is drawn between them and they are linked. Two
-wireless nodes that are farther apart than the set range are not linked.
-During Execute mode, users may move wireless nodes around by clicking and
-dragging them, and wireless links will be dynamically made or broken.
-
-### Running Commands within Nodes
-
-You can double click a node to bring up a terminal for running shell commands. Within
-the terminal you can run anything you like and those commands will be run in the context of the node.
-For standard CORE nodes, the only thing to keep in mind is that you are using the host file
-system and anything you change or do can impact the greater system. By default, your terminal
-will open within the node's home directory for the running session, but it is temporary and
-will be removed when the session is stopped.
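-
-For example, commands run from a node's terminal execute within that node's
-network context (the address below is purely illustrative):
-
-```shell
-# run inside a node's terminal
-ip addr show
-ping -c 3 10.0.0.1
-```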
-
-You can also launch GUI based applications from within standard CORE nodes, but you need to
-enable xhost access to root.
-
-```shell
-xhost +local:root
-```
-
-### Mobility Scripting
-
-CORE has a few ways to script mobility.
-
-| Option | Description |
-|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| ns-2 script | The script specifies either absolute positions or waypoints with a velocity. Locations are given with Cartesian coordinates. |
-| gRPC API | An external entity can move nodes by leveraging the gRPC API |
-| EMANE events | See [EMANE](emane.md) for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude. |
-
-For the first method, you can create a mobility script using a text
-editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate
-the script with one of the wireless networks
-using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
-button, and set the *mobility script file* field in the resulting *ns2script*
-configuration dialog.
-
-Here is an example for creating a BonnMotion script for 10 nodes:
-
-```shell
-bm -f sample RandomWaypoint -n 10 -d 60 -x 1000 -y 750
-bm NSFile -f sample
-# use the resulting 'sample.ns_movements' file in CORE
-```
-
-When the Execute mode is started and one of the WLAN nodes has a mobility
-script, a mobility script window will appear. This window contains controls for
-starting, stopping, and resetting the running time for the mobility script. The
-**loop** checkbox causes the script to play continuously. The **resolution** text
-box contains the number of milliseconds between each timer event; lower values
-cause the mobility to appear smoother but consume more CPU time.
-
-The format of an ns-2 mobility script looks like:
-
-```shell
-# nodes: 3, max time: 35.000000, max x: 600.00, max y: 600.00
-$node_(2) set X_ 144.0
-$node_(2) set Y_ 240.0
-$node_(2) set Z_ 0.00
-$ns_ at 1.00 "$node_(2) setdest 130.0 280.0 15.0"
-```
-
-The first three lines set an initial position for node 2. The last line in the
-above example causes node 2 to move towards the destination **(130, 280)** at
-speed **15**. All units are screen coordinates, with speed in units per second.
-The total script time is learned after all nodes have reached their waypoints.
-Initially, the time slider in the mobility script dialog will not be
-accurate.
-
-Example mobility scripts (and their associated topology files) can be found
-in the **configs/** directory.
-
-## Alerts
-
-The alerts button is located in the bottom right-hand corner
-of the status bar in the CORE GUI. This will change colors to indicate one or
-more problems with the running emulation. Clicking on the alerts button will invoke the
-alerts dialog.
-
-The alerts dialog contains a list of alerts received from
-the CORE daemon. An alert has a time, severity level, optional node number,
-and source. When the alerts button is red, this indicates one or more fatal
-exceptions. An alert with a fatal severity level indicates that one or more
-of the basic pieces of emulation could not be created, such as failure to
-create a bridge or namespace, or the failure to launch EMANE processes for an
-EMANE-based network.
-
-Clicking on an alert displays details for that
-exception. The exception source is a text string
-to help trace where the exception occurred; "service:UserDefined" for example,
-would appear for a failed validation command with the UserDefined service.
-
-A button is available at the bottom of the dialog for clearing the exception
-list.
-
-## Customizing your Topology's Look
-
-Several annotation tools are provided for changing the way your topology is
-presented. Captions may be added with the Text tool. Ovals and rectangles may
-be drawn in the background, helpful for visually grouping nodes together.
-
-During live demonstrations the marker tool may be helpful for drawing temporary
-annotations on the canvas that may be quickly erased. A size and color palette
-appears at the bottom of the toolbar when the marker tool is selected. Markings
-are only temporary and are not saved in the topology file.
-
-The basic node icons can be replaced with a custom image of your choice. Icons
-appear best when they use the GIF or PNG format with a transparent background.
-To change a node's icon, double-click the node to invoke its configuration
-dialog and click on the button to the right of the node name that shows the
-node's current icon.
-
-A background image for the canvas may be set using the *Wallpaper...* option
-from the *Canvas* menu. The image may be centered, tiled, or scaled to fit the
-canvas size. An existing terrain, map, or network diagram could be used as a
-background, for example, with CORE nodes drawn on top.
diff --git a/docs/hitl.md b/docs/hitl.md
deleted file mode 100644
index b659a36f..00000000
--- a/docs/hitl.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# Hardware In The Loop
-
-## Overview
-
-In some cases it may be impossible or impractical to run software using CORE
-nodes alone. You may need to bring external hardware into the network.
-CORE's emulated networks run in real time, so they can be connected to live
-physical networks. The RJ45 tool and the Tunnel tool help with connecting to
-the real world. These tools are available from the **Link Layer Nodes** menu.
-
-When connecting two or more CORE emulations together, MAC address collisions
-should be avoided. CORE automatically assigns MAC addresses to interfaces when
-the emulation is started, starting with **00:00:00:aa:00:00** and incrementing
-the bottom byte. The starting byte should be changed on the second CORE machine
-using the **Tools->MAC Addresses** menu option.
-
-## RJ45 Node
-
-CORE provides the RJ45 node, which represents a physical
-interface within the host that is running CORE. Any real-world network
-devices can be connected to the interface and communicate with the CORE nodes in real time.
-
-The main drawback is that one physical interface is required for each
-connection. When the physical interface is assigned to CORE, it may not be used
-for anything else. Another consideration is that the computer or network that
-you are connecting to must be co-located with the CORE machine.
-
-### GUI Usage
-
-To place an RJ45 connection, click on the **Link Layer Nodes** toolbar and select
-the **RJ45 Node** from the options. Click on the canvas where you would like
-the node to be placed. Now click on the **Link Tool** and draw a link between the RJ45
-and the other node you wish to be connected to. The RJ45 node will display "UNASSIGNED".
-Double-click the RJ45 node to assign a physical interface. A list of available
-interfaces will be shown; select one, then select **Apply**.
-
-!!! note
-
- When you press the Start button to instantiate your topology, the
- interface assigned to the RJ45 will be connected to the CORE topology. The
- interface can no longer be used by the system.
-
-### Multiple RJ45s with One Interface (VLAN)
-
-It is possible to have multiple RJ45 nodes using the same physical interface
-by leveraging 802.1Q VLANs. This allows for more RJ45 nodes than physical ports
-are available, but the (e.g. switching) hardware connected to the physical port
-must support VLAN tagging, and the available bandwidth will be shared.
-
-You need to create separate VLAN virtual devices on the Linux host,
-and then assign these devices to RJ45 nodes inside of CORE. The VLANing is
-actually performed outside of CORE, so when the CORE emulated node receives a
-packet, the VLAN tag will already be removed.
-
-Here are example commands for creating VLAN devices under Linux:
-
-```shell
-ip link add link eth0 name eth0.1 type vlan id 1
-ip link add link eth0 name eth0.2 type vlan id 2
-ip link add link eth0 name eth0.3 type vlan id 3
-```
-
-## Tunnel Tool
-
-The tunnel tool builds GRE tunnels between CORE emulations or other hosts.
-Tunneling can be helpful when the number of physical interfaces is limited or
-when the peer is located on a different network. In this case a physical interface does
-not need to be dedicated to CORE as with the RJ45 tool.
-
-The peer GRE tunnel endpoint may be another CORE machine or another
-host that supports GRE tunneling. When placing a Tunnel node, initially
-the node will display "UNASSIGNED". This text should be replaced with the IP
-address of the tunnel peer. This is the IP address of the other CORE machine or
-physical machine, not an IP address of another virtual node.
-
-!!! note
-
- Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices.
- The *gretap* device has an interface MTU of 1,458 bytes; when joined to a Linux
- bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform
- fragmentation for large packets if other bridge ports have a higher MTU such
- as 1,500 bytes.
-
-The GRE key is used to identify flows with GRE tunneling. This allows multiple
-GRE tunnels to exist between that same pair of tunnel peers. A unique number
-should be used when multiple tunnels are used with the same peer. When
-configuring the peer side of the tunnel, ensure that the matching keys are
-used.
-
-### Example Usage
-
-Here are example commands for building the other end of a tunnel on a Linux
-machine. In this example, a router in CORE has the virtual address
-**10.0.0.1/24** and the CORE host machine has the (real) address
-**198.51.100.34/24**. The Linux box
-that will connect with the CORE machine is reachable over the (real) network
-at **198.51.100.76/24**.
-The emulated router is linked with the Tunnel Node. In the
-Tunnel Node configuration dialog, the address **198.51.100.76** is entered, with
-the key set to **1**. The gretap interface on the Linux box will be assigned
-an address from the subnet of the virtual router node,
-**10.0.0.2/24**.
-
-```shell
-# these commands are run on the tunnel peer
-sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
-sudo ip addr add 10.0.0.2/24 dev gt0
-sudo ip link set dev gt0 up
-```
-
-Now the virtual router should be able to ping the Linux machine:
-
-```shell
-# from the CORE router node
-ping 10.0.0.2
-```
-
-And the Linux machine should be able to ping inside the CORE emulation:
-
-```shell
-# from the tunnel peer
-ping 10.0.0.1
-```
-
-To debug this configuration, **tcpdump** can be run on the gretap devices, or
-on the physical interfaces on the CORE or Linux machines. Make sure that a
-firewall is not blocking the GRE traffic.
diff --git a/docs/index.md b/docs/index.md
index 4afec59f..f516b648 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,17 +2,34 @@
## Introduction
-CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a
-representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are
-used. The live-running emulation can be connected to physical networks and routers. It provides an environment for
-running real applications and protocols, taking advantage of tools provided by the Linux operating system.
+CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are used. The live-running emulation can be connected to physical networks and routers. It provides an environment for running real applications and protocols, taking advantage of virtualization provided by the Linux operating system.
-CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating
-networking scenarios, security studies, and increasing the size of physical test networks.
+CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating networking scenarios, security studies, and increasing the size of physical test networks.
### Key Features
-
* Efficient and scalable
* Runs applications and protocols without modification
* Drag and drop GUI
* Highly customizable
+
+## Topics
+
+* [Architecture](architecture.md)
+* [Installation](install.md)
+* [Usage](usage.md)
+* [Python Scripting](scripting.md)
+* [Node Types](machine.md)
+* [CTRLNET](ctrlnet.md)
+* [Services](services.md)
+* [EMANE](emane.md)
+* [NS3](ns3.md)
+* [Performance](performance.md)
+* [Developers Guide](devguide.md)
+
+## Credits
+
+The CORE project was derived from the open source IMUNES project from the University of Zagreb in 2004. In 2006, changes for CORE were released back to that project, some items of which were adopted. Marko Zec is the primary developer from the University of Zagreb responsible for the IMUNES (GUI) and VirtNet (kernel) projects. Ana Kukec and Miljenko Mikuc are known contributors.
+
+Jeff Ahrenholz has been the primary Boeing developer of CORE, and has written this manual. Tom Goff designed the Python framework and has made significant contributions. Claudiu Danilov, Rod Santiago, Kevin Larson, Gary Pei, Phil Spagnolo, and Ian Chakeres have contributed code to CORE. Dan Mackley helped develop the CORE API, originally to interface with a simulator. Jae Kim and Tom Henderson have supervised the project and provided direction.
+
+Copyright (c) 2005-2018, the Boeing Company.
diff --git a/docs/install.md b/docs/install.md
index 51c05dbc..fb161f78 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,407 +1,314 @@
-# Installation
-!!! warning
+# CORE Installation
- If Docker is installed, the default iptable rules will block CORE traffic
+* Table of Contents
+{:toc}
-## Overview
+# Overview
-CORE currently supports and provides the following installation options, with the package
-option being preferred.
+This section will describe how to set up a CORE machine. Note that the easiest way to install CORE is using a binary package on Ubuntu or Fedora/CentOS (deb or rpm) using the distribution's package manager to automatically install dependencies.
-* [Package based install (rpm/deb)](#package-based-install)
-* [Script based install](#script-based-install)
-* [Dockerfile based install](#dockerfile-based-install)
+Ubuntu and Fedora/CentOS Linux are the recommended distributions for running CORE. However, these distributions are not strictly required. CORE will likely work on other flavors of Linux as well.
-### Requirements
+The primary dependencies are Tcl/Tk (8.5 or newer) for the GUI, and Python 2.7 for the CORE daemon.
-Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous
-containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.
+CORE files are installed to the following directories, when the installation prefix is */usr*.
-* Linux Kernel v3.3+
-* iproute2 4.5+ is a requirement for bridge related commands
-* nftables compatible kernel and nft command line tool
+Install Path | Description
+-------------|------------
+/usr/bin/core-gui|GUI startup command
+/usr/bin/core-daemon|Daemon startup command
+/usr/bin/|Misc. helper commands/scripts
+/usr/lib/core|GUI files
+/usr/lib/python2.7/dist-packages/core|Python modules for daemon/scripts
+/etc/core/|Daemon configuration files
+~/.core/|User-specific GUI preferences and scenario files
+/usr/share/core/|Example scripts and scenarios
+/usr/share/man/man1/|Command man pages
+/etc/init.d/core-daemon|SysV startup script for daemon
+/etc/systemd/system/core-daemon.service|Systemd startup script for daemon
-### Supported Linux Distributions
+## Prerequisites
-The plan is to support recent Ubuntu and CentOS LTS releases.
+A Linux operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon requires Python. Details of the individual software packages required can be found in the installation steps.
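+
+A quick way to confirm these prerequisites are available (a sketch, assuming `python` and `tclsh` are on your PATH):
+
+```shell
+# should report Python 2.7.x
+python --version
+# should report Tcl 8.5 or newer
+echo 'puts $tcl_version' | tclsh
+```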
-Verified:
+## Required Hardware
-* Ubuntu - 18.04, 20.04, 22.04
-* CentOS - 7.8
+Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous virtual machines, as a general rule you should select a machine having as much RAM and CPU resources as possible.
-### Files
+## Required Software
-The following is a list of files that an install will place on your system.
+CORE requires a Linux operating system because it uses virtualization provided by the kernel. It does not run on Windows or Mac OS X operating systems (unless it is running within a virtual machine guest.) The virtualization technology that CORE currently uses is Linux network namespaces.
-* executables
- * `/bin/{vcmd, vnoded}`
- * can be adjusted using the script based install; the package install will use /usr
-* python files
- * virtual environment `/opt/core/venv`
- * local install will be local to the python version used
- * `python3 -c "import core; print(core.__file__)"`
- * scripts {core-daemon, core-cleanup, etc}
- * virtualenv `/opt/core/venv/bin`
- * local `/usr/local/bin`
-* configuration files
- * `/etc/core/{core.conf, logging.conf}`
-* ospf mdr repository files when using script based install
- * `<repo>/../ospf-mdr`
+The CORE GUI requires the X.Org X Window System (X11), or can run over a remote X11 session. See the installation steps below for the specific Tcl/Tk, Python, and other libraries required to run CORE.
-### Installed Scripts
+**NOTE: CORE *Services* determine what runs on each node. You may require other software packages depending on the services you wish to use. For example, the *HTTP* service will require the *apache2* package.**
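+
+For instance, if you plan to use the *HTTP* service on an Ubuntu system, the web server package could be installed ahead of time (shown for apt; substitute whatever packages your chosen services need):
+
+```shell
+sudo apt-get install apache2
+```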
-The following python scripts are provided.
+## Installing from Packages
-| Name | Description |
-|---------------------|------------------------------------------------------------------------------|
-| core-cleanup        | tool to help remove lingering core-created containers, bridges, and directories |
-| core-cli | tool to query, open xml files, and send commands using gRPC |
-| core-daemon         | runs the backend core server providing a gRPC API |
-| core-gui | starts GUI |
-| core-python | provides a convenience for running the core python virtual environment |
-| core-route-monitor | tool to help monitor traffic across nodes and feed that to SDT |
-| core-service-update | tool to automate updating a legacy service to match current naming |
+The easiest way to install CORE is using the pre-built packages. The package managers on Ubuntu or Fedora/CentOS will automatically install dependencies for you. You can obtain the CORE packages from [CORE GitHub](https://github.com/coreemu/core/releases).
-### Upgrading from Older Release
+### Installing from Packages on Ubuntu
-Please make sure to uninstall any previous installations of CORE cleanly
-before proceeding to install.
-
-Clearing out a current install from 7.0.0+, making sure to provide options
-used for install (`-l` or `-p`).
+Install Quagga for routing. If you plan on working with wireless networks, we recommend installing [OSPF MDR](http://www.nrl.navy.mil/itd/ncs/products/ospf-manet) (replace *amd64* below with *i386* if needed to match your architecture):
```shell
-cd <CORE_REPO>
-inv uninstall
+wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/quagga-0.99.21mr2.2/quagga-mr_0.99.21mr2.2_amd64.deb
+sudo dpkg -i quagga-mr_0.99.21mr2.2_amd64.deb
```
-Previous install was built from source for CORE release older than 7.0.0:
+Or, for the regular Ubuntu version of Quagga:
```shell
-cd <CORE_REPO>
-sudo make uninstall
-make clean
-./bootstrap.sh clean
+sudo apt-get install quagga
```
-Installed from previously built packages:
+Install the CORE deb packages for Ubuntu from the command line.
```shell
-# centos
-sudo yum remove core
-# ubuntu
-sudo apt remove core
+sudo dpkg -i python-core_*.deb
+sudo dpkg -i core-gui_*.deb
```
-## Installation Examples
-
-The below links will take you to sections providing complete examples for installing
-CORE and related utilities on fresh installations. Otherwise, a breakdown for installing
-different components and the options available are detailed below.
-
-* [Ubuntu 22.04](install_ubuntu.md)
-* [CentOS 7](install_centos.md)
-
-## Package Based Install
-
-Starting with 9.0.0 there are pre-built rpm/deb packages. You can retrieve the
-rpm/deb package from [releases](https://github.com/coreemu/core/releases) page.
-
-The built packages will require and install system level dependencies, as well as run
-a post install script to install the provided CORE python wheel. A similar uninstall script
-is run when uninstalling, and requires the same options that were given during the install.
-
-!!! note
-
- PYTHON defaults to python3 for the installs below. CORE requires python3.9+, pip,
- tk compatibility for the python GUI, and venv for virtual environments.
-
-Examples for install:
+Start the CORE daemon as root. The systemd installation will auto-start the daemon, but you can use the commands below if need be.
```shell
-# recommended to upgrade to the latest version of pip before installation
-# in python, can help avoid building from source issues
-sudo <python> -m pip install --upgrade pip
-# install vcmd/vnoded, system dependencies,
-# and core python into a venv located at /opt/core/venv
-sudo <yum/apt> install -y ./<package>
-# disable the venv and install to python directly
-sudo NO_VENV=1 <yum/apt> install -y ./<package>
-# change python executable used to install for venv or direct installations
-sudo PYTHON=python3.9 <yum/apt> install -y ./<package>
-# disable venv and change python executable
-sudo NO_VENV=1 PYTHON=python3.9 <yum/apt> install -y ./<package>
-# skip installing the python portion entirely, as you plan to carry this out yourself
-# core python wheel is located at /opt/core/core-<version>-py3-none-any.whl
-sudo NO_PYTHON=1 <yum/apt> install -y ./<package>
-# install python wheel into python of your choosing
-sudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl
+# systemd
+sudo systemctl start core-daemon
+
+# sysv
+sudo service core-daemon start
```
-Examples for removal, which require using the same options as the install:
+Run the CORE GUI as a normal user:
```shell
-# remove a standard install
-sudo <yum/apt> remove core
-# remove a local install
-sudo NO_VENV=1 <yum/apt> remove core
-# remove install using alternative python
-sudo PYTHON=python3.9 <yum/apt> remove core
-# remove install using alternative python and local install
-sudo NO_VENV=1 PYTHON=python3.9 <yum/apt> remove core
-# remove install and skip python uninstall
-sudo NO_PYTHON=1 <yum/apt> remove core
-```
-
-### Installing OSPF MDR
-
-You will need to manually install OSPF MDR for routing nodes, since this is not
-provided by the package.
-
-```shell
-git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
-cd ospf-mdr
-./bootstrap.sh
-./configure --disable-doc --enable-user=root --enable-group=root \
- --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
- --localstatedir=/var/run/quagga
-make -j$(nproc)
-sudo make install
-```
-
-When done see [Post Install](#post-install).
-
-## Script Based Install
-
-The script based installation will install system level dependencies, python library and
-dependencies, as well as dependencies for building CORE.
-
-The script based install also automatically builds and installs OSPF MDR, used by default
-on routing nodes. This can optionally be skipped.
-
-Installation will carry out the following steps:
-
-* installs system dependencies for building core
-* builds vcmd/vnoded and python grpc files
-* installs core into poetry managed virtual environment or locally, if flag is passed
-* installs systemd service pointing to appropriate python location based on install type
-* clone/build/install working version of [OSPF MDR](https://github.com/USNavalResearchLaboratory/ospf-mdr)
-
-!!! note
-
- Installing locally comes with its own risks; it can result in potential
- dependency conflicts with python dependencies installed by the system package manager
-
-!!! note
-
- Provide a prefix that will be found on PATH when running as sudo,
- if the default prefix /usr/local is not valid for your system
-
-The following tools will be leveraged during installation:
-
-| Tool | Description |
-|---------------------------------------------|-----------------------------------------------------------------------|
-| [pip](https://pip.pypa.io/en/stable/) | used to install pipx |
-| [pipx](https://pipxproject.github.io/pipx/) | used to install standalone python tools (invoke, poetry) |
-| [invoke](http://www.pyinvoke.org/) | used to run provided tasks (install, uninstall, reinstall, etc) |
-| [poetry](https://python-poetry.org/) | used to install python virtual environment or building a python wheel |
-
-First we will need to clone and navigate to the CORE repo.
-
-```shell
-# clone CORE repo
-git clone https://github.com/coreemu/core.git
-cd core
-
-# install dependencies to run installation task
-./setup.sh
-# skip installing system packages, due to using python built from source
-NO_SYSTEM=1 ./setup.sh
-
-# run the following or open a new terminal
-source ~/.bashrc
-
-# Ubuntu
-inv install
-# CentOS
-inv install -p /usr
-# optionally skip python system packages
-inv install --no-python
-# optionally skip installing ospf mdr
-inv install --no-ospf
-
-# install command options
-Usage: inv[oke] [--core-opts] install [--options] [other tasks here ...]
-
-Docstring:
- install core, poetry, scripts, service, and ospf mdr
-
-Options:
- -d, --dev install development mode
- -i STRING, --install-type=STRING used to force an install type, can be one of the following (redhat, debian)
- -l, --local determines if core will install to local system, default is False
- -n, --no-python avoid installing python system dependencies
- -o, --[no-]ospf disable ospf installation
- -p STRING, --prefix=STRING prefix where scripts are installed, default is /usr/local
- -v, --verbose
-```
-
-When done see [Post Install](#post-install).
-
-### Unsupported Linux Distribution
-
-For unsupported OSs you could attempt to do the following to translate
-an installation to your use case.
-
-* make sure you have python3.9+ with venv support
-* make sure you have python3 invoke available to leverage `<repo>/tasks.py`
-
-```shell
-# this will print the commands that would be run for a given installation
-# type without actually running them, they may help in being used as
-# the basis for translating to your OS
-inv install --dry -v -p <prefix> -i <install type>
-```
-
-## Dockerfile Based Install
-
-You can leverage one of the provided Dockerfiles, to run and launch CORE within a Docker container.
-
-Since CORE nodes will leverage software available within the system for a given use case,
-make sure to update and build the Dockerfile with desired software.
-
-```shell
-# clone core
-git clone https://github.com/coreemu/core.git
-cd core
-# build image
-sudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu> .
-# start container
-sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
-# enable xhost access to the root user
-xhost +local:root
-# launch core-gui
-sudo docker exec -it core core-gui
-```
-
-When done see [Post Install](#post-install).
-
-## Installing EMANE
-
-!!! note
-
- Installing EMANE for the virtual environment is known to work for 1.21+
-
-The recommended way to install EMANE is using prebuilt packages, otherwise
-you can follow their instructions for installing from source. Installation
-information can be found [here](https://github.com/adjacentlink/emane/wiki/Install).
-
-There is an invoke task to help install the EMANE bindings into the CORE virtual
-environment, when needed. An example for running the task is below and the version
-provided should match the version of the packages installed.
-
-You will also need to make sure you are providing the correct python binary for where CORE
-is being used.
-
-Also, these EMANE bindings need to be built using `protoc` 3.19+. So make sure
-that is available and being picked up on PATH properly.
-
-Examples for building and installing EMANE python bindings for use in CORE:
-
-```shell
-# if your system does not have protoc 3.19+
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
-mkdir protoc
-unzip protoc-3.19.6-linux-x86_64.zip -d protoc
-git clone https://github.com/adjacentlink/emane.git
-cd emane
-git checkout v1.3.3
-./autogen.sh
-PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
-cd src/python
-PATH=/opt/protoc/bin:$PATH make
-/opt/core/venv/bin/python -m pip install .
-
-# when your system has protoc 3.19+
-cd <CORE_REPO>
-# example version tag v1.3.3
-# overriding python used to leverage the default virtualenv install
-PYTHON=/opt/core/venv/bin/python inv install-emane -e <version tag>
-# local install that uses whatever python3 refers to
-inv install-emane -e <version tag>
-```
-
-## Post Install
-
-After installation completes you are now ready to run CORE.
-
-### Resolving Docker Issues
-
-If you have Docker installed, by default it will change the iptables
-forwarding chain to drop packets, which will cause issues for CORE traffic.
-
-You can temporarily resolve the issue with the following command:
-
-```shell
-sudo iptables --policy FORWARD ACCEPT
-```
-
-Alternatively, you can configure Docker to avoid doing this, but doing so will likely
-break normal Docker networking usage. Using the setting below will require
-a restart.
-
-Place the file contents below in **/etc/docker/daemon.json**
-
-```json
-{
- "iptables": false
-}
-```
-
-### Resolving Path Issues
-
-One problem you may run into when running CORE, whether using the virtual environment
-or a local install, is issues related to your PATH.
-
-To add support for your user to run scripts from the virtual environment:
-
-```shell
-# can add to ~/.bashrc
-export PATH=$PATH:/opt/core/venv/bin
-```
-
-This will not solve the path issue when running as sudo, so you can do either
-of the following to compensate.
-
-```shell
-# run command passing in the right PATH to pickup from the user running the command
-sudo env PATH=$PATH core-daemon
-
-# add an alias to ~/.bashrc or something similar
-alias sudop='sudo env PATH=$PATH'
-# now you can run commands like so
-sudop core-daemon
-```
-
-### Running CORE
-
-The following assumes you have resolved PATH issues and set up the `sudop` alias.
-
-```shell
-# in one terminal run the server daemon using the alias above
-sudop core-daemon
-# in another terminal run the gui client
core-gui
```
-### Enabling Service
+After running the *core-gui* command, a GUI should appear with a canvas for drawing topologies. Messages will print out on the console about connecting to the CORE daemon.
-After installation, the core service is not enabled by default. If you desire to use the
-service, run the following commands.
+### Installing from Packages on Fedora/CentOS
+
+The commands shown here should be run as root. The *x86_64* architecture is shown in the examples below; replace with *i686* if using a 32-bit architecture.
+
+**CentOS 7 Only: in order to install *tkimg* package you must build from source.**
+
+Make sure the system is up to date.
```shell
-sudo systemctl enable core-daemon
-sudo systemctl start core-daemon
+yum update
```
+
+**Optional (Fedora 17+): Fedora 17 and newer have an additional prerequisite providing the required netem kernel modules (otherwise skip this step and have the package manager install it for you.)**
+
+```shell
+yum install kernel-modules-extra
+```
+
+Install Quagga for routing. If you plan on working with wireless networks, we recommend installing [OSPF MDR](http://www.nrl.navy.mil/itd/ncs/products/ospf-manet):
+
+```shell
+wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/quagga-0.99.21mr2.2/quagga-0.99.21mr2.2-1.el6.x86_64.rpm
+sudo yum install quagga-0.99.21mr2.2-1.el6.x86_64.rpm
+```
+
+Or, for the regular Fedora/CentOS version of Quagga:
+
+```shell
+yum install quagga
+```
+
+Install the CORE RPM packages and automatically resolve dependencies:
+
+```shell
+yum install python-core_*.rpm
+yum install core-gui_*.rpm
+```
+
+Turn off SELinux by setting *SELINUX=disabled* in the */etc/sysconfig/selinux* file and adding *selinux=0* to the kernel line in your */etc/grub.conf* file. On Fedora 15 and newer, disable sandboxd using ```chkconfig sandbox off```. You need to reboot in order for these changes to take effect.
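+
+For example, the config file edit can be scripted as follows (a sketch only; the grub kernel line is best edited by hand, and file paths should be double-checked on your system):
+
+```shell
+# set SELINUX=disabled in the config file
+sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
+# Fedora 15+ only: disable sandboxd
+sudo chkconfig sandbox off
+```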
+
+Turn off firewalls:
+
+```shell
+systemctl disable firewalld
+systemctl disable iptables.service
+systemctl disable ip6tables.service
+chkconfig iptables off
+chkconfig ip6tables off
+```
+
+You need to reboot after making these changes, or flush the firewall using
+
+```shell
+iptables -F
+ip6tables -F
+```
+
+Start the CORE daemon as root.
+
+```shell
+# systemd
+sudo systemctl daemon-reload
+sudo systemctl start core-daemon
+
+# sysv
+sudo service core-daemon start
+```
+
+Run the CORE GUI as a normal user:
+
+```shell
+core-gui
+```
+
+After running the *core-gui* command, a GUI should appear with a canvas for drawing topologies. Messages will print out on the console about connecting to the CORE daemon.
+
+### Installing from Source
+
+This option is listed here for developers and advanced users who are comfortable patching and building source code. Please consider using the binary packages instead for a simplified install experience.
+
+To build CORE from source on Ubuntu, first install these development packages. These packages are not required for normal binary package installs.
+
+#### Ubuntu 18.04 pre-reqs
+
+```shell
+sudo apt install automake pkg-config gcc libev-dev bridge-utils ebtables python-dev python-sphinx python-setuptools python-lxml python-enum34 tk libtk-img
+```
+
+#### Ubuntu 16.04 Requirements
+
+```shell
+sudo apt-get install automake bridge-utils ebtables python-dev libev-dev python-sphinx python-setuptools python-enum34 python-lxml libtk-img
+```
+
+
+#### CentOS 7 with Gnome Desktop Requirements
+
+```shell
+sudo yum -y install automake gcc python-devel libev-devel python-sphinx tk python-lxml python-enum34
+```
+
+You can obtain the CORE source from the [CORE GitHub](https://github.com/coreemu/core) page. Choose either a stable release version or the development snapshot available in the *nightly_snapshots* directory.
+
+```shell
+tar xzf core-*.tar.gz
+cd core-*
+```
+
+#### Traditional Autotools Build
+```shell
+./bootstrap.sh
+./configure
+make
+sudo make install
+```
+
+#### Build Documentation
+```shell
+./bootstrap.sh
+./configure
+make doc
+```
+
+#### Build Packages
+Install fpm: http://fpm.readthedocs.io/en/latest/installing.html
+Then run the package build commands below; DESTDIR is used for GUI packaging only.
+
+```shell
+./bootstrap.sh
+./configure
+make
+mkdir /tmp/core-gui
+make fpm DESTDIR=/tmp/core-gui
+
+```
+This will produce:
+
+* CORE GUI rpm/deb files
+ * core-gui_$VERSION_$ARCH
+* CORE ns3 rpm/deb files
+ * python-core-ns3_$VERSION_$ARCH
+* CORE python rpm/deb files for SysV and systemd service types
+ * python-core-sysv_$VERSION_$ARCH
+ * python-core-systemd_$VERSION_$ARCH
+
+
+### Quagga Routing Software
+
+Virtual networks generally require some form of routing in order to work (e.g. to automatically populate routing tables for routing packets from one subnet to another.) CORE builds OSPF routing protocol configurations by default when the blue router node type is used. The OSPF protocol is available from the [Quagga open source routing suite](http://www.quagga.net).
+
+Quagga is not specified as a dependency for the CORE packages because there are two different Quagga packages that you may use:
+
+* [Quagga](http://www.quagga.net) - the standard version of Quagga, suitable for static wired networks, and usually available via your distribution's package manager.
+
+* [OSPF MANET Designated Routers](http://www.nrl.navy.mil/itd/ncs/products/ospf-manet) (MDR) - the Quagga routing suite with a modified version of OSPFv3, optimized for use with mobile wireless networks. The *mdr* node type (and the MDR service) requires this variant of Quagga.
+
+If you plan on working with wireless networks, we recommend installing OSPF MDR; otherwise install the standard version of Quagga using your package manager or from source.
+
+#### Installing Quagga from Packages
+
+To install the standard version of Quagga from packages, use your package manager (Linux).
+
+Ubuntu users:
+
+```shell
+sudo apt-get install quagga
+```
+
+Fedora/CentOS users:
+
+```shell
+sudo yum install quagga
+```
+
+To install the Quagga variant having OSPFv3 MDR, first download the appropriate package, and install using the package manager.
+
+Ubuntu users:
+```shell
+wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/quagga-0.99.21mr2.2/quagga-mr_0.99.21mr2.2_amd64.deb
+sudo dpkg -i quagga-mr_0.99.21mr2.2_amd64.deb
+```
+
+Replace *amd64* with *i686* if using a 32-bit architecture.
+
+Fedora/CentOS users:
+
+```shell
+wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/quagga-0.99.21mr2.2/quagga-0.99.21mr2.2-1.el6.x86_64.rpm
+sudo yum install quagga-0.99.21mr2.2-1.el6.x86_64.rpm
+```
+
+Replace *x86_64* with *i686* if using a 32-bit architecture.
+
+#### Compiling Quagga for CORE
+
+To compile Quagga to work with CORE on Linux:
+
+```shell
+wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/quagga-0.99.21mr2.2/quagga-0.99.21mr2.2.tar.gz
+tar xzf quagga-0.99.21mr2.2.tar.gz
+cd quagga-0.99.21mr2.2
+./configure --enable-user=root --enable-group=root --with-cflags=-ggdb \
+ --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
+ --localstatedir=/var/run/quagga
+make
+sudo make install
+```
+
+Note that the configuration directory */usr/local/etc/quagga* shown for Quagga above could be */etc/quagga*, if you create a symbolic link from */etc/quagga/Quagga.conf -> /usr/local/etc/quagga/Quagga.conf* on the host. The *quaggaboot.sh* script in a Linux network namespace will try to do this for you if needed.
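+
+For example, the link can be created manually like so (a sketch, assuming the default source-install location):
+
+```shell
+sudo mkdir -p /etc/quagga
+sudo ln -s /usr/local/etc/quagga/Quagga.conf /etc/quagga/Quagga.conf
+```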
+
+If you try to run quagga after installing from source and get an error such as:
+
+```shell
+error while loading shared libraries libzebra.so.0
+```
+
+this is usually a sign that you have to run ```sudo ldconfig``` to refresh the cache file.
+
+### VCORE
+
+CORE is capable of running inside of a virtual machine, using software such as VirtualBox, VMware Server or QEMU. However, CORE itself is performing machine virtualization in order to realize multiple emulated nodes, and running CORE virtually adds additional contention for the physical resources. **For performance reasons, this is not recommended.** Timing inside of a VM often has problems. If you do run CORE from within a VM, it is recommended that you view the GUI with remote X11 over SSH, so the virtual machine does not need to emulate the video card with the X11 application.
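+
+For example, the GUI can be displayed locally over SSH X11 forwarding (a sketch; substitute your VM's user and address):
+
+```shell
+# -X enables X11 forwarding, so the GUI renders on your local display
+ssh -X user@vcore-vm core-gui
+```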
+
+A CORE virtual machine is provided for download, named VCORE. This is perhaps the easiest way to get CORE up and running as the machine is already set up for you. This may be adequate for initially evaluating the tool but keep in mind the performance limitations of running within VirtualBox or VMware. To install the virtual machine, you first need to obtain VirtualBox from http://www.virtualbox.org, or VMware Server or Player from http://www.vmware.com (this commercial software is distributed for free.) Once virtualization software has been installed, you can import the virtual machine appliance using the *vbox* file for VirtualBox or the *vmx* file for VMware. See the documentation that comes with VCORE for login information.
+
diff --git a/docs/install_centos.md b/docs/install_centos.md
deleted file mode 100644
index 53de2af6..00000000
--- a/docs/install_centos.md
+++ /dev/null
@@ -1,144 +0,0 @@
-# Install CentOS
-
-## Overview
-
-Below is a detailed path for installing CORE and related tooling on a fresh
-CentOS 7 install. Both of the examples below will install CORE into its
-own virtual environment located at **/opt/core/venv**. Both examples below
-also assume using **~/Documents** as the working directory.
-
-## Script Install
-
-This section covers step by step commands that can be used to install CORE using
-the script based installation path.
-
-``` shell
-# install system packages
-sudo yum -y update
-sudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel \
- libxml2-devel protobuf-devel unzip uuid-devel tcpdump make epel-release
-sudo yum-builddep -y python3
-
-# install python3.9
-cd ~/Documents
-wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz
-tar xf Python-3.9.15.tgz
-cd Python-3.9.15
-./configure --enable-optimizations --with-ensurepip=install
-sudo make -j$(nproc) altinstall
-python3.9 -m pip install --upgrade pip
-
-# install core
-cd ~/Documents
-git clone https://github.com/coreemu/core
-cd core
-NO_SYSTEM=1 PYTHON=/usr/local/bin/python3.9 ./setup.sh
-source ~/.bashrc
-PYTHON=python3.9 inv install -p /usr --no-python
-
-# install emane
-cd ~/Documents
-wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz
-tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz
-cd emane-1.3.3-release-1/rpms/el7/x86_64
-sudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm
-
-# install emane python bindings into CORE virtual environment
-cd ~/Documents
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
-mkdir protoc
-unzip protoc-3.19.6-linux-x86_64.zip -d protoc
-git clone https://github.com/adjacentlink/emane.git
-cd emane
-git checkout v1.3.3
-./autogen.sh
-PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
-cd src/python
-PATH=~/Documents/protoc/bin:$PATH make
-sudo /opt/core/venv/bin/python -m pip install .
-```
-
-## Package Install
-
-This section covers step by step commands that can be used to install CORE using
-the package based installation path. This will require downloading a package from the release
-page, to use during the install CORE step below.
-
-``` shell
-# install system packages
-sudo yum -y update
-sudo yum install -y git sudo wget tzdata unzip libpcap-devel libpcre3-devel libxml2-devel \
- protobuf-devel unzip uuid-devel tcpdump automake gawk libreadline-devel libtool \
- pkg-config make
-sudo yum-builddep -y python3
-
-# install python3.9
-cd ~/Documents
-wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz
-tar xf Python-3.9.15.tgz
-cd Python-3.9.15
-./configure --enable-optimizations --with-ensurepip=install
-sudo make -j$(nproc) altinstall
-python3.9 -m pip install --upgrade pip
-
-# install core
-cd ~/Documents
-sudo PYTHON=python3.9 yum install -y ./core_*.rpm
-
-# install ospf mdr
-cd ~/Documents
-git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
-cd ospf-mdr
-./bootstrap.sh
-./configure --disable-doc --enable-user=root --enable-group=root \
- --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
- --localstatedir=/var/run/quagga
-make -j$(nproc)
-sudo make install
-
-# install emane
-cd ~/Documents
-wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz
-tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz
-cd emane-1.3.3-release-1/rpms/el7/x86_64
-sudo yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm
-
-# install emane python bindings into CORE virtual environment
-cd ~/Documents
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
-mkdir protoc
-unzip protoc-3.19.6-linux-x86_64.zip -d protoc
-git clone https://github.com/adjacentlink/emane.git
-cd emane
-git checkout v1.3.3
-./autogen.sh
-PYTHON=/opt/core/venv/bin/python ./configure --prefix=/usr
-cd src/python
-PATH=~/Documents/protoc/bin:$PATH make
-sudo /opt/core/venv/bin/python -m pip install .
-```
-
-## Setup PATH
-
-The CORE virtual environment and related scripts will not be found on your PATH,
-so some adjustments need to be made.
-
-To add support for your user to run scripts from the virtual environment:
-
-```shell
-# can add to ~/.bashrc
-export PATH=$PATH:/opt/core/venv/bin
-```
-
-This will not solve the path issue when running as sudo, so you can do either
-of the following to compensate.
-
-```shell
-# run command passing in the right PATH to pickup from the user running the command
-sudo env PATH=$PATH core-daemon
-
-# add an alias to ~/.bashrc or something similar
-alias sudop='sudo env PATH=$PATH'
-# now you can run commands like so
-sudop core-daemon
-```
diff --git a/docs/install_ubuntu.md b/docs/install_ubuntu.md
deleted file mode 100644
index 57274a4f..00000000
--- a/docs/install_ubuntu.md
+++ /dev/null
@@ -1,116 +0,0 @@
-# Install Ubuntu
-
-## Overview
-
-Below is a detailed path for installing CORE and related tooling on a fresh
-Ubuntu 22.04 installation. Both of the examples below will install CORE into its
-own virtual environment located at **/opt/core/venv**. Both examples below
-also assume using **~/Documents** as the working directory.
-
-## Script Install
-
-This section covers step by step commands that can be used to install CORE using
-the script based installation path.
-
-``` shell
-# install system packages
-sudo apt-get update -y
-sudo apt-get install -y ca-certificates git sudo wget tzdata libpcap-dev libpcre3-dev \
- libprotobuf-dev libxml2-dev protobuf-compiler unzip uuid-dev iproute2 iputils-ping \
- tcpdump
-
-# install core
-cd ~/Documents
-git clone https://github.com/coreemu/core
-cd core
-./setup.sh
-source ~/.bashrc
-inv install
-
-# install emane
-cd ~/Documents
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
-mkdir protoc
-unzip protoc-3.19.6-linux-x86_64.zip -d protoc
-git clone https://github.com/adjacentlink/emane.git
-cd emane
-./autogen.sh
-./configure --prefix=/usr
-make -j$(nproc)
-sudo make install
-cd src/python
-make clean
-PATH=~/Documents/protoc/bin:$PATH make
-sudo /opt/core/venv/bin/python -m pip install .
-```
-
-## Package Install
-
-This section covers step by step commands that can be used to install CORE using
-the package based installation path. This will require downloading a package from the release
-page, to use during the install CORE step below.
-
-``` shell
-# install system packages
-sudo apt-get update -y
-sudo apt-get install -y ca-certificates python3 python3-tk python3-pip python3-venv \
- libpcap-dev libpcre3-dev libprotobuf-dev libxml2-dev protobuf-compiler unzip \
- uuid-dev automake gawk git wget libreadline-dev libtool pkg-config g++ make \
- iputils-ping tcpdump
-
-# install core
-cd ~/Documents
-sudo apt-get install -y ./core_*.deb
-
-# install ospf mdr
-cd ~/Documents
-git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git
-cd ospf-mdr
-./bootstrap.sh
-./configure --disable-doc --enable-user=root --enable-group=root \
- --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
- --localstatedir=/var/run/quagga
-make -j$(nproc)
-sudo make install
-
-# install emane
-cd ~/Documents
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
-mkdir protoc
-unzip protoc-3.19.6-linux-x86_64.zip -d protoc
-git clone https://github.com/adjacentlink/emane.git
-cd emane
-./autogen.sh
-./configure --prefix=/usr
-make -j$(nproc)
-sudo make install
-cd src/python
-make clean
-PATH=~/Documents/protoc/bin:$PATH make
-sudo /opt/core/venv/bin/python -m pip install .
-```
-
-## Setup PATH
-
-The CORE virtual environment and related scripts will not be found on your PATH,
-so some adjustments need to be made.
-
-To add support for your user to run scripts from the virtual environment:
-
-```shell
-# can add to ~/.bashrc
-export PATH=$PATH:/opt/core/venv/bin
-```
-
-This will not solve the path issue when running as sudo, so you can do either
-of the following to compensate.
-
-```shell
-# run command passing in the right PATH to pickup from the user running the command
-sudo env PATH=$PATH core-daemon
-
-# add an alias to ~/.bashrc or something similar
-alias sudop='sudo env PATH=$PATH'
-# now you can run commands like so
-sudop core-daemon
-```
diff --git a/docs/lxc.md b/docs/lxc.md
deleted file mode 100644
index 1ee11453..00000000
--- a/docs/lxc.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# LXC Support
-
-## Overview
-
-LXC nodes are provided by way of LXD to create nodes using predefined
-images and provide file system separation.
-
-## Installation
-
-### Debian Systems
-
-```shell
-sudo snap install lxd
-```
-
-## Configuration
-
-Initialize LXD and say no to adding a default bridge.
-
-```shell
-sudo lxd init
-```
-
-## Group Setup
-
-To use LXC nodes within the python GUI, you will need to make sure the user running the GUI is a member of the
-lxd group.
-
-```shell
-# add group if does not exist
-sudo groupadd lxd
-
-# add user to group
-sudo usermod -aG lxd $USER
-
-# to get this change to take effect, log out and back in or run the following
-newgrp lxd
-```
-
-## Tools and Versions Tested With
-
-* LXD 3.14
-* nsenter from util-linux 2.31.1
diff --git a/docs/machine.md b/docs/machine.md
new file mode 100644
index 00000000..bd68d7e1
--- /dev/null
+++ b/docs/machine.md
@@ -0,0 +1,22 @@
+# CORE Node Types
+
+* Table of Contents
+{:toc}
+
+## Overview
+
+Different node types can be configured in CORE, and each node type has a *machine type* that indicates how the node will be represented at run time. Different machine types allow for different virtualization options.
+
+## netns nodes
+
+The *netns* machine type is the default. This is for nodes that will be backed by Linux network namespaces. This default machine type is very lightweight, providing a minimum amount of virtualization in order to emulate a network. Another reason this is designated as the default machine type is because this virtualization technology typically requires no changes to the kernel; it is available out-of-the-box from the latest mainstream Linux distributions.
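+
+The underlying kernel feature can be explored directly with the *iproute2* tools, shown here only to illustrate what backs a netns node (CORE manages its namespaces for you):
+
+```shell
+# create a namespace, list its isolated interfaces, then remove it
+sudo ip netns add demo
+sudo ip netns exec demo ip link show
+sudo ip netns delete demo
+```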
+
+## physical nodes
+
+The *physical* machine type is used for nodes that represent a real Linux-based machine that will participate in the emulated network scenario. This is typically used, for example, to incorporate racks of server machines from an emulation testbed. A physical node is one that is running the CORE daemon (*core-daemon*), but will not be further partitioned into virtual machines. Services that are run on the physical node do not run in an isolated or virtualized environment, but directly on the operating system.
+
+Physical nodes must be assigned to servers, the same way nodes are assigned to emulation servers with *Distributed Emulation*. The list of available physical nodes currently shares the same dialog box and list as the emulation servers, accessed using the *Emulation Servers...* entry from the *Session* menu.
+
+Support for physical nodes is under development and may be improved in future releases. Currently, when any node is linked to a physical node, a dashed line is drawn to indicate network tunneling. A GRE tunneling interface will be created on the physical node and used to tunnel traffic to and from the emulated world.
+
+Double-clicking on a physical node during runtime opens a terminal with an SSH shell to that node. Users should configure public-key SSH login as done with emulation servers.
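+
+Public-key login can be set up in the usual way (a sketch; substitute the physical node's user and hostname):
+
+```shell
+# generate a key pair if one does not already exist
+ssh-keygen
+# copy the public key to the physical node
+ssh-copy-id user@physical-node
+```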
diff --git a/docs/nodetypes.md b/docs/nodetypes.md
deleted file mode 100644
index 8f095746..00000000
--- a/docs/nodetypes.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Node Types
-
-## Overview
-
-Different node types can be used within CORE, each with their own
-tradeoffs and functionality.
-
-## CORE Nodes
-
-CORE nodes are the standard node type typically used in CORE. They are
-backed by Linux network namespaces. They use very little system resources
-in order to emulate a network. They do however share the hosts file system
-as they do not get their own. CORE nodes will have a directory uniquely
-created for them as a place to keep their files and mounted directories
-(`/tmp/pycore.<session id>/<node name>.conf`)
+```
+
+The interactive Python shell allows some interaction with the Python objects for the emulation.
+
+In another terminal, nodes can be accessed using *vcmd*:
+
+```shell
+vcmd -c /tmp/pycore.10781/n1 -- bash
+root@n1:/tmp/pycore.10781/n1.conf#
+root@n1:/tmp/pycore.10781/n1.conf# ping 10.0.0.3
+PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=7.99 ms
+64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=3.73 ms
+64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=3.60 ms
+^C
+--- 10.0.0.3 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2002ms
+rtt min/avg/max/mdev = 3.603/5.111/7.993/2.038 ms
+root@n1:/tmp/pycore.10781/n1.conf#
+```
+
+The ping packets shown above are traversing an ns-3 ad-hoc Wifi simulated network.
+
+To clean up the session, use the Session.shutdown() method from the Python terminal.
+
+```python
+print session
+
+session.shutdown()
+```
+
+A CORE/ns-3 Python script will instantiate an Ns3Session, which is a CORE Session having CoreNs3Nodes, an ns-3 MobilityHelper, and a fixed duration. The CoreNs3Node inherits from both the CoreNode and the ns-3 Node classes -- it is a network namespace having an associated simulator object. The CORE TunTap interface is used, represented by an ns-3 TapBridge in *CONFIGURE_LOCAL* mode, where ns-3 creates and configures the tap device. An event is scheduled to install the taps at time 0.
+
+**NOTE: The GUI can be used to run the *ns3wifi.py* and *ns3wifirandomwalk.py* scripts directly. First, *core-daemon* must be stopped and run within the waf root shell. Then the GUI may be run as a normal user, and the *Execute Python Script...* option may be used from the *File* menu. Dragging nodes around in the *ns3wifi.py* example will cause their ns-3 positions to be updated.**
+
+Users may find the files *ns3wimax.py* and *ns3lte.py* in that example directory; those files were similarly configured, but the underlying ns-3 support is not present as of ns-3.16, so they will not work. Specifically, ns-3 has to be extended to support bridging the Tap device to an LTE and a WiMax device.
+
+## Integration details
+
+The previous example *ns3wifi.py* used Python API from the special Python objects *Ns3Session* and *Ns3WifiNet*. The example program does not import anything directly from the ns-3 python modules; rather, only the above two objects are used, and the API available to configure the underlying ns-3 objects is constrained. For example, *Ns3WifiNet* instantiates a constant-rate 802.11a-based ad hoc network, using a lot of ns-3 defaults.
+
+However, programs may be written with a blend of ns-3 API and CORE Python API calls. This section examines some of the fundamental objects in the CORE ns-3 support. Source code can be found in *ns3/corens3/obj.py* and example code in *ns3/corens3/examples/*.
+
+## Ns3Session
+
+The *Ns3Session* class is a CORE Session that starts an ns-3 simulation thread. ns-3 actually runs as a separate process on the same host as the CORE daemon, and the control of starting and stopping this process is performed by the *Ns3Session* class.
+
+Example:
+
+```python
+session = Ns3Session(persistent=True, duration=opt.duration)
+```
+
+Note the use of the duration attribute to control how long the ns-3 simulation should run. By default, the duration is 600 seconds.
+
+Typically, the session keeps track of the ns-3 nodes (holding a node container for references to the nodes). This is accomplished via the ```addnode()``` method, e.g.:
+
+```python
+for i in xrange(1, opt.numnodes + 1):
+ node = session.addnode(name="n%d" % i)
+```
+
+```addnode()``` creates instances of a *CoreNs3Node*, which we'll cover next.
+
+## CoreNs3Node
+
+A *CoreNs3Node* is both a CoreNode and an ns-3 node:
+
+```python
+class CoreNs3Node(CoreNode, ns.network.Node):
+ """
+ The CoreNs3Node is both a CoreNode backed by a network namespace and
+ an ns-3 Node simulator object. When linked to simulated networks, the TunTap
+ device will be used.
+ """
+```
+
+## CoreNs3Net
+
+A *CoreNs3Net* derives from *PyCoreNet*. This network exists entirely in simulation, using the TunTap device to interact between the emulated and the simulated realm. *Ns3WifiNet* is a specialization of this.
+
+As an example, this type of code would be typically used to add a WiFi network to a session:
+
+```python
+wifi = session.addobj(cls=Ns3WifiNet, name="wlan1", rate="OfdmRate12Mbps")
+wifi.setposition(30, 30, 0)
+```
+
+The above two lines will create a wlan1 object and set its initial canvas position. Later in the code, the newnetif method of the CoreNs3Node can be used to add interfaces on particular nodes to this network; e.g.:
+
+```python
+for i in xrange(1, opt.numnodes + 1):
+ node = session.addnode(name="n%d" % i)
+ node.newnetif(wifi, ["%s/%s" % (prefix.addr(i), prefix.prefixlen)])
+```
+
+## Mobility
+
+Mobility in ns-3 is handled by an object (a MobilityModel) aggregated to an ns-3 node. The MobilityModel is able to report the position of the object in the ns-3 space. This is a slightly different model from, for instance, EMANE, where location is associated with an interface, and the CORE GUI, where mobility is configured by right-clicking on a WiFi cloud.
+
+The CORE GUI supports the ability to render the underlying ns-3 mobility model, if one is configured, on the CORE canvas. For example, the example program *ns3wifirandomwalk.py* uses five nodes (by default) in a random walk mobility model. This can be executed by starting the core daemon from an ns-3 waf shell:
+
+```shell
+sudo bash
+cd /path/to/ns-3
+./waf shell
+core-daemon
+```
+
+and in a separate window, starting the CORE GUI (not from a waf shell) and selecting the *Execute Python script...* option from the File menu, selecting the *ns3wifirandomwalk.py* script.
+
+The program invokes ns-3 mobility through the following statement:
+
+```python
+session.setuprandomwalkmobility(bounds=(1000.0, 750.0, 0))
+```
+
+This can be replaced by a different mode of mobility, in which nodes are placed according to a constant mobility model, and a special API call to the CoreNs3Net object is made to use the CORE canvas positions.
+
+```python
+# replace this call:
+session.setuprandomwalkmobility(bounds=(1000.0, 750.0, 0))
+# with constant mobility, plus a call to use the CORE canvas positions:
+session.setupconstantmobility()
+wifi.usecorepositions()
+```
+
+In this mode, the user dragging around the nodes on the canvas will cause CORE to update the position of the underlying ns-3 nodes.
diff --git a/docs/performance.md b/docs/performance.md
index 449e3837..b057dd23 100644
--- a/docs/performance.md
+++ b/docs/performance.md
@@ -1,44 +1,28 @@
# CORE Performance
+* Table of Contents
+{:toc}
+
## Overview
-The top question about the performance of CORE is often *how many nodes can it
-handle?* The answer depends on several factors:
+The top question about the performance of CORE is often *how many nodes can it handle?* The answer depends on several factors:
-| Factor | Performance Impact |
-|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Hardware | the number and speed of processors in the computer, the available processor cache, RAM memory, and front-side bus speed may greatly affect overall performance. |
-| Operating system version | distribution of Linux and the specific kernel versions used will affect overall performance. |
-| Active processes | all nodes share the same CPU resources, so if one or more nodes is performing a CPU-intensive task, overall performance will suffer. |
-| Network traffic          | the more packets that are sent around the virtual network, the higher the CPU usage. |
-| GUI usage | widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation. |
+* Hardware - the number and speed of processors in the computer, the available processor cache, RAM memory, and front-side bus speed may greatly affect overall performance.
+* Operating system version - distribution of Linux and the specific kernel versions used will affect overall performance.
+* Active processes - all nodes share the same CPU resources, so if one or more nodes is performing a CPU-intensive task, overall performance will suffer.
+* Network traffic - the more packets that are sent around the virtual network, the higher the CPU usage.
+* GUI usage - widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation.
-On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
-we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3
-routing. On this hardware CORE can instantiate 100 or more nodes, but at
-that point it becomes critical as to what each of the nodes is doing.
+On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux, we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more nodes, but at that point it becomes critical as to what each of the nodes is doing.
-Because this software is primarily a network emulator, the more appropriate
-question is *how much network traffic can it handle?* On the same 3.0GHz
-server described above, running Linux, about 300,000 packets-per-second can
-be pushed through the system. The number of hops and the size of the packets
-is less important. The limiting factor is the number of times that the
-operating system needs to handle a packet. The 300,000 pps figure represents
-the number of times the system as a whole needed to deal with a packet. As
-more network hops are added, this increases the number of context switches
-and decreases the throughput seen on the full length of the network path.
+Because this software is primarily a network emulator, the more appropriate question is *how much network traffic can it handle?* On the same 3.0GHz server described above, running Linux, about 300,000 packets-per-second can be pushed through the system. The number of hops and the size of the packets is less important. The limiting factor is the number of times that the operating system needs to handle a packet. The 300,000 pps figure represents the number of times the system as a whole needed to deal with a packet. As more network hops are added, this increases the number of context switches and decreases the throughput seen on the full length of the network path.
-!!! note
+**NOTE: The right question to be asking is *"how much traffic?"*, not *"how many nodes?"*.**
- The right question to be asking is *"how much traffic?"*, not
- *"how many nodes?"*.
+For a more detailed study of performance in CORE, refer to the following publications:
-For a more detailed study of performance in CORE, refer to the following
-publications:
+* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
-* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE
- Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
-* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings
- of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
-* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time
- network emulator, Proceedings of IEEE MILCOM Conference, 2008.
+* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
+
+* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time network emulator, Proceedings of IEEE MILCOM Conference, 2008.
diff --git a/docs/pycco.css b/docs/pycco.css
new file mode 100644
index 00000000..aef571a5
--- /dev/null
+++ b/docs/pycco.css
@@ -0,0 +1,190 @@
+/*--------------------- Layout and Typography ----------------------------*/
+body {
+ font-family: 'Palatino Linotype', 'Book Antiqua', Palatino, FreeSerif, serif;
+ font-size: 16px;
+ line-height: 24px;
+ color: #252519;
+ margin: 0; padding: 0;
+ background: #f5f5ff;
+}
+a {
+ color: #261a3b;
+}
+ a:visited {
+ color: #261a3b;
+ }
+p {
+ margin: 0 0 15px 0;
+}
+h1, h2, h3, h4, h5, h6 {
+ margin: 40px 0 15px 0;
+}
+h2, h3, h4, h5, h6 {
+ margin-top: 0;
+ }
+#container {
+ background: white;
+ }
+#container, div.section {
+ position: relative;
+}
+#background {
+ position: absolute;
+ top: 0; left: 580px; right: 0; bottom: 0;
+ background: #f5f5ff;
+ border-left: 1px solid #e5e5ee;
+ z-index: 0;
+}
+#jump_to, #jump_page {
+ background: white;
+ -webkit-box-shadow: 0 0 25px #777; -moz-box-shadow: 0 0 25px #777;
+ -webkit-border-bottom-left-radius: 5px; -moz-border-radius-bottomleft: 5px;
+ font: 10px Arial;
+ text-transform: uppercase;
+ cursor: pointer;
+ text-align: right;
+}
+#jump_to, #jump_wrapper {
+ position: fixed;
+ right: 0; top: 0;
+ padding: 5px 10px;
+}
+ #jump_wrapper {
+ padding: 0;
+ display: none;
+ }
+ #jump_to:hover #jump_wrapper {
+ display: block;
+ }
+ #jump_page {
+ padding: 5px 0 3px;
+ margin: 0 0 25px 25px;
+ }
+ #jump_page .source {
+ display: block;
+ padding: 5px 10px;
+ text-decoration: none;
+ border-top: 1px solid #eee;
+ }
+ #jump_page .source:hover {
+ background: #f5f5ff;
+ }
+ #jump_page .source:first-child {
+ }
+div.docs {
+ float: left;
+ max-width: 500px;
+ min-width: 500px;
+ min-height: 5px;
+ padding: 10px 25px 1px 50px;
+ vertical-align: top;
+ text-align: left;
+}
+ .docs pre {
+ margin: 15px 0 15px;
+ padding-left: 15px;
+ }
+ .docs p tt, .docs p code {
+ background: #f8f8ff;
+ border: 1px solid #dedede;
+ font-size: 12px;
+ padding: 0 0.2em;
+ }
+ .octowrap {
+ position: relative;
+ }
+ .octothorpe {
+ font: 12px Arial;
+ text-decoration: none;
+ color: #454545;
+ position: absolute;
+ top: 3px; left: -20px;
+ padding: 1px 2px;
+ opacity: 0;
+ -webkit-transition: opacity 0.2s linear;
+ }
+ div.docs:hover .octothorpe {
+ opacity: 1;
+ }
+div.code {
+ margin-left: 580px;
+ padding: 14px 15px 16px 50px;
+ vertical-align: top;
+}
+ .code pre, .docs p code {
+ font-size: 12px;
+ }
+ pre, tt, code {
+ line-height: 18px;
+ font-family: Monaco, Consolas, "Lucida Console", monospace;
+ margin: 0; padding: 0;
+ }
+div.clearall {
+ clear: both;
+}
+
+
+/*---------------------- Syntax Highlighting -----------------------------*/
+td.linenos { background-color: #f0f0f0; padding-right: 10px; }
+span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
+body .hll { background-color: #ffffcc }
+body .c { color: #408080; font-style: italic } /* Comment */
+body .err { border: 1px solid #FF0000 } /* Error */
+body .k { color: #954121 } /* Keyword */
+body .o { color: #666666 } /* Operator */
+body .cm { color: #408080; font-style: italic } /* Comment.Multiline */
+body .cp { color: #BC7A00 } /* Comment.Preproc */
+body .c1 { color: #408080; font-style: italic } /* Comment.Single */
+body .cs { color: #408080; font-style: italic } /* Comment.Special */
+body .gd { color: #A00000 } /* Generic.Deleted */
+body .ge { font-style: italic } /* Generic.Emph */
+body .gr { color: #FF0000 } /* Generic.Error */
+body .gh { color: #000080; font-weight: bold } /* Generic.Heading */
+body .gi { color: #00A000 } /* Generic.Inserted */
+body .go { color: #808080 } /* Generic.Output */
+body .gp { color: #000080; font-weight: bold } /* Generic.Prompt */
+body .gs { font-weight: bold } /* Generic.Strong */
+body .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
+body .gt { color: #0040D0 } /* Generic.Traceback */
+body .kc { color: #954121 } /* Keyword.Constant */
+body .kd { color: #954121; font-weight: bold } /* Keyword.Declaration */
+body .kn { color: #954121; font-weight: bold } /* Keyword.Namespace */
+body .kp { color: #954121 } /* Keyword.Pseudo */
+body .kr { color: #954121; font-weight: bold } /* Keyword.Reserved */
+body .kt { color: #B00040 } /* Keyword.Type */
+body .m { color: #666666 } /* Literal.Number */
+body .s { color: #219161 } /* Literal.String */
+body .na { color: #7D9029 } /* Name.Attribute */
+body .nb { color: #954121 } /* Name.Builtin */
+body .nc { color: #0000FF; font-weight: bold } /* Name.Class */
+body .no { color: #880000 } /* Name.Constant */
+body .nd { color: #AA22FF } /* Name.Decorator */
+body .ni { color: #999999; font-weight: bold } /* Name.Entity */
+body .ne { color: #D2413A; font-weight: bold } /* Name.Exception */
+body .nf { color: #0000FF } /* Name.Function */
+body .nl { color: #A0A000 } /* Name.Label */
+body .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
+body .nt { color: #954121; font-weight: bold } /* Name.Tag */
+body .nv { color: #19469D } /* Name.Variable */
+body .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
+body .w { color: #bbbbbb } /* Text.Whitespace */
+body .mf { color: #666666 } /* Literal.Number.Float */
+body .mh { color: #666666 } /* Literal.Number.Hex */
+body .mi { color: #666666 } /* Literal.Number.Integer */
+body .mo { color: #666666 } /* Literal.Number.Oct */
+body .sb { color: #219161 } /* Literal.String.Backtick */
+body .sc { color: #219161 } /* Literal.String.Char */
+body .sd { color: #219161; font-style: italic } /* Literal.String.Doc */
+body .s2 { color: #219161 } /* Literal.String.Double */
+body .se { color: #BB6622; font-weight: bold } /* Literal.String.Escape */
+body .sh { color: #219161 } /* Literal.String.Heredoc */
+body .si { color: #BB6688; font-weight: bold } /* Literal.String.Interpol */
+body .sx { color: #954121 } /* Literal.String.Other */
+body .sr { color: #BB6688 } /* Literal.String.Regex */
+body .s1 { color: #219161 } /* Literal.String.Single */
+body .ss { color: #19469D } /* Literal.String.Symbol */
+body .bp { color: #954121 } /* Name.Builtin.Pseudo */
+body .vc { color: #19469D } /* Name.Variable.Class */
+body .vg { color: #19469D } /* Name.Variable.Global */
+body .vi { color: #19469D } /* Name.Variable.Instance */
+body .il { color: #666666 } /* Literal.Number.Integer.Long */
diff --git a/docs/python.md b/docs/python.md
deleted file mode 100644
index 0985bb8d..00000000
--- a/docs/python.md
+++ /dev/null
@@ -1,437 +0,0 @@
-# Python API
-
-## Overview
-
-Writing your own Python scripts offers a rich programming environment with
-complete control over all aspects of the emulation.
-
-The scripts need to be run with root privileges because they create new network
-namespaces. In general, a CORE Python script does not connect to the CORE
-daemon; in fact, the *core-daemon* is just another Python script that uses
-the CORE Python modules and exchanges messages with the GUI.
-
-## Examples
-
-### Node Models
-
-When creating nodes of type `core.nodes.base.CoreNode`, these are the default models
-and the services they map to.
-
-* mdr
- * zebra, OSPFv3MDR, IPForward
-* PC
- * DefaultRoute
-* router
- * zebra, OSPFv2, OSPFv3, IPForward
-* host
- * DefaultRoute, SSH
-
-### Interface Helper
-
-There is an interface helper class that can be leveraged for convenience
-when creating interface data for nodes. Alternatively one can manually create
-a `core.emulator.data.InterfaceData` class instead with appropriate information.
-
-Manually creating interface data:
-
-```python
-from core.emulator.data import InterfaceData
-
-# id is optional and will set to the next available id
-# name is optional and will default to eth
-# mac is optional and will result in a randomly generated mac
-iface_data = InterfaceData(
- id=0,
- name="eth0",
- ip4="10.0.0.1",
- ip4_mask=24,
- ip6="2001::",
- ip6_mask=64,
-)
-```
-
-Leveraging the interface prefixes helper class:
-
-```python
-from core.emulator.data import IpPrefixes
-
-ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
-# node is used to get an ip4/ip6 address indexed from within the above prefixes
-# name is optional and would default to eth
-# mac is optional and will result in a randomly generated mac
-iface_data = ip_prefixes.create_iface(
- node=node, name="eth0", mac="00:00:00:00:aa:00"
-)
-```
-
-### Listening to Events
-
-Various events that can occur within a session can be listened to.
-
-Event types:
-
-* session - events for changes in session state and mobility start/stop/pause
-* node - events for node movements and icon changes
-* link - events for link configuration changes and wireless link add/delete
-* config - configuration events when legacy gui joins a session
-* exception - alert/error events
-* file - file events when the legacy gui joins a session
-
-```python
-def event_listener(event):
- print(event)
-
-
-# add an event listener to event type you want to listen to
-# each handler will receive an object unique to that type
-session.event_handlers.append(event_listener)
-session.exception_handlers.append(event_listener)
-session.node_handlers.append(event_listener)
-session.link_handlers.append(event_listener)
-session.file_handlers.append(event_listener)
-session.config_handlers.append(event_listener)
-```
-
-### Configuring Links
-
-Links can be configured at the time of creation or during runtime.
-
-Currently supported configuration options:
-
-* bandwidth (bps)
-* delay (us)
-* dup (%)
-* jitter (us)
-* loss (%)
-
-```python
-from core.emulator.data import LinkOptions
-
-# configuring when creating a link
-options = LinkOptions(
- bandwidth=54_000_000,
- delay=5000,
- dup=5,
- loss=5.5,
- jitter=0,
-)
-session.add_link(n1_id, n2_id, iface1_data, iface2_data, options)
-
-# configuring during runtime
-session.update_link(n1_id, n2_id, iface1_id, iface2_id, options)
-```
-
-### Peer to Peer Example
-
-```python
-# required imports
-from core.emulator.coreemu import CoreEmu
-from core.emulator.data import IpPrefixes
-from core.emulator.enumerations import EventTypes
-from core.nodes.base import CoreNode, Position
-
-# ip generator for example
-ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
-
-# create emulator instance for creating sessions and utility methods
-coreemu = CoreEmu()
-session = coreemu.create_session()
-
-# must be in configuration state for nodes to start, when using "add_node" below
-session.set_state(EventTypes.CONFIGURATION_STATE)
-
-# create nodes
-position = Position(x=100, y=100)
-n1 = session.add_node(CoreNode, position=position)
-position = Position(x=300, y=100)
-n2 = session.add_node(CoreNode, position=position)
-
-# link nodes together
-iface1 = ip_prefixes.create_iface(n1)
-iface2 = ip_prefixes.create_iface(n2)
-session.add_link(n1.id, n2.id, iface1, iface2)
-
-# start session
-session.instantiate()
-
-# do whatever you like here
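-# for example, run a command within a node: ping n2 from n1 and print the output
-print(n1.cmd(f"ping -c 3 {iface2.ip4}"))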
-input("press enter to shutdown")
-
-# stop session
-session.shutdown()
-```
-
-### Switch/Hub Example
-
-```python
-# required imports
-from core.emulator.coreemu import CoreEmu
-from core.emulator.data import IpPrefixes
-from core.emulator.enumerations import EventTypes
-from core.nodes.base import CoreNode, Position
-from core.nodes.network import SwitchNode
-
-# ip generator for example
-ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
-
-# create emulator instance for creating sessions and utility methods
-coreemu = CoreEmu()
-session = coreemu.create_session()
-
-# must be in configuration state for nodes to start, when using "add_node" below
-session.set_state(EventTypes.CONFIGURATION_STATE)
-
-# create switch
-position = Position(x=200, y=200)
-switch = session.add_node(SwitchNode, position=position)
-
-# create nodes
-position = Position(x=100, y=100)
-n1 = session.add_node(CoreNode, position=position)
-position = Position(x=300, y=100)
-n2 = session.add_node(CoreNode, position=position)
-
-# link nodes to switch
-iface1 = ip_prefixes.create_iface(n1)
-session.add_link(n1.id, switch.id, iface1)
-iface2 = ip_prefixes.create_iface(n2)
-session.add_link(n2.id, switch.id, iface2)
-
-# start session
-session.instantiate()
-
-# do whatever you like here
-input("press enter to shutdown")
-
-# stop session
-session.shutdown()
-```
-
-### WLAN Example
-
-```python
-# required imports
-from core.emulator.coreemu import CoreEmu
-from core.emulator.data import IpPrefixes
-from core.emulator.enumerations import EventTypes
-from core.location.mobility import BasicRangeModel
-from core.nodes.base import CoreNode, Position
-from core.nodes.network import WlanNode
-
-# ip generator for example
-ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
-
-# create emulator instance for creating sessions and utility methods
-coreemu = CoreEmu()
-session = coreemu.create_session()
-
-# must be in configuration state for nodes to start, when using "add_node" below
-session.set_state(EventTypes.CONFIGURATION_STATE)
-
-# create wlan
-position = Position(x=200, y=200)
-wlan = session.add_node(WlanNode, position=position)
-
-# create nodes
-options = CoreNode.create_options()
-options.model = "mdr"
-position = Position(x=100, y=100)
-n1 = session.add_node(CoreNode, position=position, options=options)
-position = Position(x=300, y=100)
-n2 = session.add_node(CoreNode, position=position, options=options)
-
-# configuring wlan
-session.mobility.set_model_config(wlan.id, BasicRangeModel.name, {
- "range": "280",
- "bandwidth": "55000000",
- "delay": "6000",
- "jitter": "5",
- "error": "5",
-})
-
-# link nodes to wlan
-iface1 = ip_prefixes.create_iface(n1)
-session.add_link(n1.id, wlan.id, iface1)
-iface2 = ip_prefixes.create_iface(n2)
-session.add_link(n2.id, wlan.id, iface2)
-
-# start session
-session.instantiate()
-
-# do whatever you like here
-input("press enter to shutdown")
-
-# stop session
-session.shutdown()
-```
-
-### EMANE Example
-
-For EMANE you can import and use one of the existing models and
-use its name for configuration.
-
-Current models:
-
-* core.emane.models.ieee80211abg.EmaneIeee80211abgModel
-* core.emane.models.rfpipe.EmaneRfPipeModel
-* core.emane.models.tdma.EmaneTdmaModel
-* core.emane.models.bypass.EmaneBypassModel
-
-Their configuration options are generated dynamically from the parsed EMANE manifest
-files of the installed version of EMANE.
-
-Options and their purpose can be found at the [EMANE Wiki](https://github.com/adjacentlink/emane/wiki).
-
-When configuring EMANE global settings or model MAC/PHY specific settings, any value
-not provided will fall back to its default.
-
-```python
-# required imports
-from core.emane.models.ieee80211abg import EmaneIeee80211abgModel
-from core.emane.nodes import EmaneNet
-from core.emulator.coreemu import CoreEmu
-from core.emulator.data import IpPrefixes
-from core.emulator.enumerations import EventTypes
-from core.nodes.base import CoreNode, Position
-
-# ip generator for example
-ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
-
-# create emulator instance for creating sessions and utility methods
-coreemu = CoreEmu()
-session = coreemu.create_session()
-
-# location information is required to be set for emane
-session.location.setrefgeo(47.57917, -122.13232, 2.0)
-session.location.refscale = 150.0
-
-# must be in configuration state for nodes to start, when using "add_node" below
-session.set_state(EventTypes.CONFIGURATION_STATE)
-
-# create emane
-options = EmaneNet.create_options()
-options.emane_model = EmaneIeee80211abgModel.name
-position = Position(x=200, y=200)
-emane = session.add_node(EmaneNet, position=position, options=options)
-
-# create nodes
-options = CoreNode.create_options()
-options.model = "mdr"
-position = Position(x=100, y=100)
-n1 = session.add_node(CoreNode, position=position, options=options)
-position = Position(x=300, y=100)
-n2 = session.add_node(CoreNode, position=position, options=options)
-
-# configure general emane settings
-config = session.emane.get_configs()
-config.update({
- "eventservicettl": "2"
-})
-
-# configure emane model settings
-# using a dict mapping; currently supported values are strings
-session.emane.set_model_config(emane.id, EmaneIeee80211abgModel.name, {
- "unicastrate": "3",
-})
-
-# link nodes to emane
-iface1 = ip_prefixes.create_iface(n1)
-session.add_link(n1.id, emane.id, iface1)
-iface2 = ip_prefixes.create_iface(n2)
-session.add_link(n2.id, emane.id, iface2)
-
-# start session
-session.instantiate()
-
-# do whatever you like here
-input("press enter to shutdown")
-
-# stop session
-session.shutdown()
-```
-
-EMANE Model Configuration:
-
-```python
-from core import utils
-
-# standardized way to retrieve an appropriate config id
-# iface id can be omitted, to allow a general configuration for a model, per node
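-# "node" and "iface_id" are assumed to come from an existing session setup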
-config_id = utils.iface_config_id(node.id, iface_id)
-# set emane configuration for the config id
-session.emane.set_config(config_id, EmaneIeee80211abgModel.name, {
- "unicastrate": "3",
-})
-```
-
-## Configuring a Service
-
-Services help generate and run bash scripts on nodes for a given purpose.
-
-Configuring the files of a service results in a specific, hard-coded script being
-generated in place of the default scripts, which may otherwise leverage dynamic generation.
-
-The following features can be configured for a service:
-
-* configs - files that will be generated
-* dirs - directories that will be mounted unique to the node
-* startup - commands to run to start a service
-* validate - commands to run to validate a service
-* shutdown - commands to run to stop a service
-
-Editing service properties:
-
-```python
-# configure a service, for a node, for a given session
-session.services.set_service(node_id, service_name)
-service = session.services.get_service(node_id, service_name)
-service.configs = ("file1.sh", "file2.sh")
-service.dirs = ("/etc/node",)
-service.startup = ("bash file1.sh",)
-service.validate = ()
-service.shutdown = ()
-```
-
-When editing a service file, the file name must match one of the `configs`
-entries that the service will generate.
-
-Editing a service file:
-
-```python
-# to edit the contents of a generated file you can specify
-# the service, the file name, and its contents
-session.services.set_service_file(
- node_id,
- service_name,
- file_name,
- "echo hello",
-)
-```
-
-## File Examples
-
-File versions of the network examples can be found
-[here](https://github.com/coreemu/core/tree/master/package/examples/python).
-
-## Executing Scripts from GUI
-
-To execute a Python script from the GUI, you need to have the following.
-
-The built-in name check below lets the script detect that it is being executed
-from the GUI; it can be omitted if your script does not use a name check.
-
-```python
-if __name__ in ["__main__", "__builtin__"]:
- main()
-```
-
-A script can add sessions to the core-daemon. A global *coreemu* variable is
-exposed to the script pointing to the *CoreEmu* object.
-
-The example below falls back to a new CoreEmu object, in case you would like
-to run the script standalone, outside of the core-daemon.
-
-```python
-coreemu = globals().get("coreemu") or CoreEmu()
-session = coreemu.create_session()
-```
diff --git a/docs/scripting.md b/docs/scripting.md
new file mode 100644
index 00000000..0b0ca47f
--- /dev/null
+++ b/docs/scripting.md
@@ -0,0 +1,120 @@
+
+# CORE Python Scripting
+
+* Table of Contents
+{:toc}
+
+## Overview
+
+CORE can be used via the GUI or Python scripting. Writing your own Python scripts offers a rich programming environment with complete control over all aspects of the emulation. This chapter provides a brief introduction to scripting. Most of the documentation is available from sample scripts, or online via interactive Python.
+
+The best starting point is the sample scripts that are included with CORE. If you have a CORE source tree, the example script files can be found under *core/daemon/examples/api/*. When CORE is installed from packages, the example script files will be in */usr/share/core/examples/api/* (or */usr/local/* prefix when installed from source.) For the most part, the example scripts are self-documenting; see the comments contained within the Python code.
+
+The scripts should be run with root privileges because they create new network namespaces. In general, a CORE Python script does not connect to the CORE daemon, in fact the *core-daemon* is just another Python script that uses the CORE Python modules and exchanges messages with the GUI. To connect the GUI to your scripts, see the included sample scripts that allow for GUI connections.
+
+Here are the basic elements of a CORE Python script:
+
+```python
+from core.emulator.coreemu import CoreEmu
+from core.emulator.emudata import IpPrefixes
+from core.enumerations import EventTypes
+from core.enumerations import NodeTypes
+
+# ip generator for example
+prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16")
+
+# create emulator instance for creating sessions and utility methods
+coreemu = CoreEmu()
+session = coreemu.create_session()
+
+# must be in configuration state for nodes to start, when using "add_node" below
+session.set_state(EventTypes.CONFIGURATION_STATE)
+
+# create switch network node
+switch = session.add_node(_type=NodeTypes.SWITCH)
+
+# create nodes
+for _ in range(2):
+ node = session.add_node()
+ interface = prefixes.create_interface(node)
+ session.add_link(node.objid, switch.objid, interface_one=interface)
+
+# instantiate session
+session.instantiate()
+
+# shutdown session
+coreemu.shutdown()
+```
+
+The above script creates a CORE session with two nodes connected to a switch.
+
+A good way to learn about the CORE Python modules is via interactive Python. Scripts can be run using *python -i*. Cut and paste the simple script above and you will have two nodes connected by a switch, ready for inspection from the interactive prompt.
+
+The CORE Python modules are documented with comments in the code. From an interactive Python shell, you can retrieve online help about the various classes and methods; for example *help(nodes.CoreNode)* or *help(Session)*.
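+
+For instance, with the script above pasted into a *python -i* session, a quick sketch of such exploration might be:
+
+```python
+# browse the CORE API interactively using the built-in help
+help(CoreEmu)
+help(session.add_node)
+```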
+
+**NOTE: The CORE daemon *core-daemon* manages a list of sessions and allows the GUI to connect and control sessions. Your Python script uses the same CORE modules but runs independently of the daemon. The daemon does not need to be running for your script to work.**
+
+The session created by a Python script may be viewed in the GUI if certain steps are followed. The GUI has a *File Menu*, *Execute Python script...* option for running a script and automatically connecting to it. Once connected, normal GUI interaction is possible, such as moving and double-clicking nodes, activating Widgets, etc.
+
+The script should have a line such as the following for running it from the GUI.
+
+```python
+if __name__ in ["__main__", "__builtin__"]:
+ main()
+```
+
+A script can add sessions to the core-daemon. A global *coreemu* variable is exposed to the script pointing to the *CoreEmu* object.
+The example below falls back to a new CoreEmu object, in case you would like to run the script standalone, outside of the core-daemon.
+
+```python
+coreemu = globals().get("coreemu", CoreEmu())
+session = coreemu.create_session()
+```
+
+Finally, nodes and networks need to have their coordinates set to something, otherwise they will be grouped at the coordinates *<0, 0>*. First sketching the topology in the GUI and then using the *Export Python script* option may help here.
+
+```python
+switch.setposition(x=80, y=50)
+```
+
+A fully-worked example script that you can launch from the GUI is available in the examples directory.
+
+## Configuring Services
+
+Examples of setting or configuring custom services for a node.
+
+```python
+from core.emulator.coreemu import CoreEmu
+
+# create session and node
+coreemu = CoreEmu()
+session = coreemu.create_session()
+node = session.add_node()
+
+# create and retrieve custom service
+session.services.set_service(node.objid, "ServiceName")
+custom_service = session.services.get_service(node.objid, "ServiceName")
+
+# set custom file data
+session.services.set_service_file(node.objid, "ServiceName", "FileName", "custom file data")
+
+# set services to a node, using custom services when defined
+session.services.add_services(node, node.type, ["Service1", "Service2"])
+```
+
+## Configuring EMANE Models
+
+Examples of configuring custom EMANE model settings.
+
+```python
+from core.emane.ieee80211abg import EmaneIeee80211abgModel
+from core.emulator.coreemu import CoreEmu
+
+# create session and emane network
+coreemu = CoreEmu()
+session = coreemu.create_session()
+emane_network = session.create_emane_network(
+ model=EmaneIeee80211abgModel,
+ geo_reference=(47.57917, -122.13232, 2.00000)
+)
+emane_network.setposition(x=80, y=50)
+
+# set custom emane model config
+config = {}
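+# values are provided as strings, e.g. config["unicastrate"] = "3" for this model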
+session.emane.set_model_config(emane_network.objid, EmaneIeee80211abgModel.name, config)
+```
diff --git a/docs/services.md b/docs/services.md
index 9e6e3642..793e6f99 100644
--- a/docs/services.md
+++ b/docs/services.md
@@ -1,299 +1,13 @@
-# Services (Deprecated)
+# CORE Services
-## Overview
+* Table of Contents
+{:toc}
-CORE uses the concept of services to specify what processes or scripts run on a
-node when it is started. Layer-3 nodes such as routers and PCs are defined by
-the services that they run.
+## Custom Services
-Services may be customized for each node, or new custom services can be
-created. New node types can be created each having a different name, icon, and
-set of default services. Each service defines the per-node directories,
-configuration files, startup index, starting commands, validation commands,
-shutdown commands, and meta-data associated with a node.
+CORE supports custom-developed services by way of dynamically loading user-created Python files.
+Custom services should be placed within the path defined by **custom_services_dir** in the CORE
+configuration file. This path cannot end in **/services**.
-!!! note
-
- **Network namespace nodes do not undergo the normal Linux boot process**
- using the **init**, **upstart**, or **systemd** frameworks. These
- lightweight nodes use configured CORE *services*.
-
-## Available Services
-
-| Service Group | Services |
-|----------------------------------|-----------------------------------------------------------------------|
-| [BIRD](services/bird.md) | BGP, OSPF, RADV, RIP, Static |
-| [EMANE](services/emane.md) | Transport Service |
-| [FRR](services/frr.md) | BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra |
-| [NRL](services/nrl.md) | arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF |
-| [Quagga](services/quagga.md) | BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra |
-| [SDN](services/sdn.md) | OVS, RYU |
-| [Security](services/security.md) | Firewall, IPsec, NAT, VPN Client, VPN Server |
-| [Utility](services/utility.md) | ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSF, UCARP |
-| [XORP](services/xorp.md) | BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager |
-
-## Node Types and Default Services
-
-Here are the default node types and their services:
-
-| Node Type | Services |
-|-----------|--------------------------------------------------------------------------------------------------------------------------------------------|
-| *router* | zebra, OSPFv2, OSPFv3, and IPForward services for IGP link-state routing. |
-| *host* | DefaultRoute and SSH services, representing an SSH server having a default route when connected directly to a router. |
-| *PC* | DefaultRoute service for having a default route when connected directly to a router. |
-| *mdr* | zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing. |
-| *prouter* | a physical router, having the same default services as the *router* node type; for incorporating Linux testbed machines into an emulation. |
-
-Configuration files can be automatically generated by each service. For
-example, CORE automatically generates routing protocol configuration for the
-router nodes in order to simplify the creation of virtual networks.
-
-To change the services associated with a node, double-click on the node to
-invoke its configuration dialog and click on the *Services...* button,
-or right-click a node and choose *Services...* from the menu.
-Services are enabled or disabled by clicking on their names. The button next to
-each service name allows you to customize all aspects of this service for this
-node. For example, special route redistribution commands could be inserted
-into the Quagga routing configuration associated with the zebra service.
-
-To change the default services associated with a node type, use the Node Types
-dialog available from the *Edit* button at the end of the Layer-3 nodes
-toolbar, or choose *Node types...* from the *Session* menu. Note that
-any new services selected are not applied to existing nodes if the nodes have
-been customized.
-
-## Customizing a Service
-
-A service can be fully customized for a particular node. From the node's
-configuration dialog, click on the button next to the service name to invoke
-the service customization dialog for that service.
-The dialog has three tabs for configuring the different aspects of the service:
-files, directories, and startup/shutdown.
-
-!!! note
-
- A **yellow** customize icon next to a service indicates that service
- requires customization (e.g. the *Firewall* service).
- A **green** customize icon indicates that a custom configuration exists.
- Click the *Defaults* button when customizing a service to remove any
- customizations.
-
-The Files tab is used to display or edit the configuration files or scripts that
-are used for this service. Files can be selected from a drop-down list, and
-their contents are displayed in a text entry below. The file contents are
-generated by the CORE daemon based on the network topology that exists at
-the time the customization dialog is invoked.
-
-The Directories tab shows the per-node directories for this service. For the
-default types, CORE nodes share the same filesystem tree, except for these
-per-node directories that are defined by the services. For example, the
-**/var/run/quagga** directory needs to be unique for each node running
-the Zebra service, because Quagga running on each node needs to write separate
-PID files to that directory.
-
-!!! note
-
- The **/var/log** and **/var/run** directories are
- mounted uniquely per-node by default.
-    Per-node mount targets can be found in **/tmp/pycore.<session id>/<node name>.conf/**
-
-The Startup/shutdown tab lists commands that are used to start and stop this
-service. The startup index allows configuring when this service starts relative
-to the other services enabled for this node; a service with a lower startup
-index value is started before those with higher values. Because shell scripts
-generated by the Files tab will not have execute permissions set, the startup
-commands should include the shell name, for example ```sh script.sh```.
-
-Shutdown commands optionally terminate the process(es) associated with this
-service. Generally they send a kill signal to the running process using the
-*kill* or *killall* commands. If the service does not terminate
-the running processes using a shutdown command, the processes will be killed
-when the *vnoded* daemon is terminated (with *kill -9*) and
-the namespace destroyed. It is a good practice to
-specify shutdown commands, which will allow for proper process termination, and
-for run-time control of stopping and restarting services.
-
-Validate commands are executed following the startup commands. A validate
-command can execute a process or script that should return zero if the service
-has started successfully, and have a non-zero return value for services that
-have had a problem starting. For example, the *pidof* command will check
-if a process is running and return zero when found. When a validate command
-produces a non-zero return value, an exception is generated, which will cause
-an error to be displayed in the Check Emulation Light.
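-
-Taken together, these pieces map directly onto a service definition. Below is a
-minimal sketch (the service and process names are hypothetical; see the full
-example further down):
-
-```python
-from core.services.coreservices import CoreService
-
-
-class MyDaemonService(CoreService):
-    name = "MyDaemon"
-    group = "Utility"
-    # per-node directory for runtime files, mounted uniquely for each node
-    dirs = ("/var/run/mydaemon",)
-    configs = ("mydaemon.sh",)
-    # generated scripts lack execute permissions, so invoke the shell directly
-    startup = ("sh mydaemon.sh",)
-    # pidof returns non-zero when the process is absent, failing validation
-    validate = ("pidof mydaemond",)
-    # send a kill signal to allow proper termination and run-time restarts
-    shutdown = ("killall mydaemond",)
-```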
-
-!!! note
-
- To start, stop, and restart services during run-time, right-click a
- node and use the *Services...* menu.
-
-## New Services
-
-Services can save time required to configure nodes, especially if a number
-of nodes require similar configuration procedures. New services can be
-introduced to automate tasks.
-
-### Leveraging UserDefined
-
-The easiest way to capture the configuration of a new process into a service
-is by using the **UserDefined** service. This is a blank service where any
-aspect may be customized. The UserDefined service is convenient for testing
-ideas for a service before adding a new service type.
-
-### Creating New Services
-
-!!! note
-
- The directory name used in **custom_services_dir** below should be unique and
- should not correspond to any existing Python module name. For example, don't
- use the name **subprocess** or **services**.
-
-1. Modify the example service shown below
- to do what you want. It could generate config/script files, mount per-node
- directories, start processes/scripts, etc. sample.py is a Python file that
- defines one or more classes to be imported. You can create multiple Python
- files that will be imported.
-
-2. Put these files in a directory such as `/home/<username>/.coregui/custom_services`.
- Note that the last component of this directory name **custom_services** should not
- be named the same as any python module, due to naming conflicts.
-
-3. Add a **custom_services_dir = `/home/<username>/.coregui/custom_services`** entry to the
- /etc/core/core.conf file.
-
-4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax)
- should be displayed in the daemon output.
-
-5. Start using your custom service on your nodes. You can create a new node
- type that uses your service, or change the default services for an existing
- node type, or change individual nodes.
-
-If you have created a new service type that may be useful to others, please
-consider contributing it to the CORE project.
-
-#### Example Custom Service
-
-Below is the skeleton for a custom service with some documentation. Most
-people would likely only set up the required class variables **(name/group)**.
-Then define the **configs** (files they want to generate) and implement the
-**generate_config** function to dynamically create the files wanted. Finally
-the **startup** commands would be supplied, which typically tends to be
-running the shell files generated.
-
-```python
-"""
-Simple example custom service, used to drive shell commands on a node.
-"""
-from typing import Tuple
-
-from core.nodes.base import CoreNode
-from core.services.coreservices import CoreService, ServiceMode
-
-
-class ExampleService(CoreService):
- """
- Example Custom CORE Service
-
- :cvar name: name used as a unique ID for this service and is required, no spaces
- :cvar group: allows you to group services within the GUI under a common name
- :cvar executables: executables this service depends on to function, if executable is
- not on the path, service will not be loaded
- :cvar dependencies: services that this service depends on for startup, tuple of
- service names
- :cvar dirs: directories that this service will create within a node
- :cvar configs: files that this service will generate, without a full path this file
- goes in the node's directory e.g. /tmp/pycore.12345/n1.conf/myfile
- :cvar startup: commands used to start this service, any non-zero exit code will
- cause a failure
- :cvar validate: commands used to validate that a service was started, any non-zero
- exit code will cause a failure
- :cvar validation_mode: validation mode, used to determine startup success.
- NON_BLOCKING - runs startup commands, and validates success with validation commands
- BLOCKING - runs startup commands, and validates success with the startup commands themselves
- TIMER - runs startup commands, and validates success by waiting for "validation_timer" alone
- :cvar validation_timer: time in seconds for a service to wait for validation, before
- determining success in TIMER/NON_BLOCKING modes.
- :cvar validation_period: period in seconds to wait before retrying validation,
- only used in NON_BLOCKING mode
- :cvar shutdown: shutdown commands to stop this service
- """
-
- name: str = "ExampleService"
- group: str = "Utility"
- executables: Tuple[str, ...] = ()
- dependencies: Tuple[str, ...] = ()
- dirs: Tuple[str, ...] = ()
- configs: Tuple[str, ...] = ("myservice1.sh", "myservice2.sh")
- startup: Tuple[str, ...] = tuple(f"sh {x}" for x in configs)
- validate: Tuple[str, ...] = ()
- validation_mode: ServiceMode = ServiceMode.NON_BLOCKING
- validation_timer: int = 5
- validation_period: float = 0.5
- shutdown: Tuple[str, ...] = ()
-
- @classmethod
- def on_load(cls) -> None:
- """
- Provides a way to run some arbitrary logic when the service is loaded, possibly
- to help facilitate dynamic settings for the environment.
-
- :return: nothing
- """
- pass
-
- @classmethod
- def get_configs(cls, node: CoreNode) -> Tuple[str, ...]:
- """
- Provides a way to dynamically generate the config files from the node a service
- will run. Defaults to the class definition and can be left out entirely if not
- needed.
-
- :param node: core node that the service is being ran on
- :return: tuple of config files to create
- """
- return cls.configs
-
- @classmethod
- def generate_config(cls, node: CoreNode, filename: str) -> str:
- """
- Returns a string representation for a file, given the node the service is
- starting on the config filename that this information will be used for. This
- must be defined, if "configs" are defined.
-
- :param node: core node that the service is being ran on
- :param filename: configuration file to generate
- :return: configuration file content
- """
- cfg = "#!/bin/sh\n"
- if filename == cls.configs[0]:
- cfg += "# auto-generated by MyService (sample.py)\n"
- for iface in node.get_ifaces():
- cfg += f'echo "Node {node.name} has interface {iface.name}"\n'
- elif filename == cls.configs[1]:
- cfg += "echo hello"
- return cfg
-
- @classmethod
- def get_startup(cls, node: CoreNode) -> Tuple[str, ...]:
- """
- Provides a way to dynamically generate the startup commands from the node a
- service will run. Defaults to the class definition and can be left out entirely
- if not needed.
-
- :param node: core node that the service is being ran on
- :return: tuple of startup commands to run
- """
- return cls.startup
-
- @classmethod
- def get_validate(cls, node: CoreNode) -> Tuple[str, ...]:
- """
- Provides a way to dynamically generate the validate commands from the node a
- service will run. Defaults to the class definition and can be left out entirely
- if not needed.
-
- :param node: core node that the service is being ran on
- :return: tuple of commands to validate service startup with
- """
- return cls.validate
-```
+Here is an example service with documentation describing functionality:
+[Example Service](exampleservice.html)
diff --git a/docs/services/bird.md b/docs/services/bird.md
deleted file mode 100644
index db2f7701..00000000
--- a/docs/services/bird.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# BIRD Internet Routing Daemon
-
-## Overview
-
-The [BIRD Internet Routing Daemon](https://bird.network.cz/) is a routing
-daemon; i.e., software responsible for managing kernel packet forwarding
-tables. It aims to be a dynamic IP routing daemon with full support for all
-modern routing protocols, an easy-to-use configuration interface, and a
-powerful route filtering language, primarily targeting (but not limited to)
-Linux and other UNIX-like systems, distributed under the GNU General Public
-License. BIRD has a free implementation of several well known and common
-routing and router-supplemental protocols, namely RIP, RIPng, OSPFv2, OSPFv3,
-BGP, BFD, and NDP/RA. BIRD supports the IPv4 and IPv6 address families, the
-Linux kernel, and several BSD variants (tested on FreeBSD, NetBSD and
-OpenBSD). BIRD consists of the *bird* daemon and the *birdc* interactive CLI
-client used for supervision.
-
-In order to be able to use the BIRD Internet Routing Daemon, you must first
-install the project on your machine.
-
-## BIRD Package Install
-
-```shell
-sudo apt-get install bird
-```
-
-## BIRD Source Code Install
-
-You can download BIRD source code from its
-[official repository.](https://gitlab.labs.nic.cz/labs/bird/)
-
-```shell
-./configure
-make
-su
-make install
-vi /etc/bird/bird.conf
-```
-
-The installation will place the bird directory inside */etc*, where you will
-also find its config file.
-
-In order to use BIRD, you must modify *bird.conf*; the provided configuration
-file is not configured beyond allowing the bird daemon to start, which means
-that nothing else will happen if you run it unmodified.
diff --git a/docs/services/emane.md b/docs/services/emane.md
deleted file mode 100644
index 3f904091..00000000
--- a/docs/services/emane.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# EMANE Services
-
-## Overview
-
-EMANE related services for CORE.
-
-## Transport Service
-
-Helps with setting up EMANE for using an external transport.
-
diff --git a/docs/services/frr.md b/docs/services/frr.md
deleted file mode 100644
index aa2db6ff..00000000
--- a/docs/services/frr.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# FRRouting
-
-## Overview
-
-FRRouting is a routing software package that provides TCP/IP based routing services with routing protocols support such
-as BGP, RIP, OSPF, IS-IS and more. FRR also supports special BGP Route Reflector and Route Server behavior. In addition
-to traditional IPv4 routing protocols, FRR also supports IPv6 routing protocols. With an SNMP daemon that supports the
-AgentX protocol, FRR provides routing protocol MIB read-only access (SNMP Support).
-
-FRR (as of v7.2) currently supports the following protocols:
-
-* BGPv4
-* OSPFv2
-* OSPFv3
-* RIPv1/v2/ng
-* IS-IS
-* PIM-SM/MSDP/BSM(AutoRP)
-* LDP
-* BFD
-* Babel
-* PBR
-* OpenFabric
-* VRRPv2/v3
-* EIGRP (alpha)
-* NHRP (alpha)
-
-## FRRouting Package Install
-
-Ubuntu 19.10 and later
-
-```shell
-sudo apt update && sudo apt install frr
-```
-
-Ubuntu 16.04 and Ubuntu 18.04
-
-```shell
-sudo apt install curl
-curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
-FRRVER="frr-stable"
-echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
-sudo apt update && sudo apt install frr frr-pythontools
-```
-
-Fedora 31
-
-```shell
-sudo dnf update && sudo dnf install frr
-```
-
-## FRRouting Source Code Install
-
-Building FRR from source is the best way to ensure you have the latest features and bug fixes. Details for each
-supported platform, including dependency package listings, permissions, and other gotchas, are in the developer’s
-documentation.
-
-FRR’s source is available on the project [GitHub page](https://github.com/FRRouting/frr).
-
-```shell
-git clone https://github.com/FRRouting/frr.git
-```
-
-Change into your FRR source directory and issue:
-
-```shell
-./bootstrap.sh
-```
-
-Then, choose the configuration options that you wish to use for the installation. You can find these options on
-FRR's [official webpage](http://docs.frrouting.org/en/latest/installation.html). Once you have chosen your configure
-options, run the configure script and pass the options you chose:
-
-```shell
-./configure \
- --prefix=/usr \
- --enable-exampledir=/usr/share/doc/frr/examples/ \
- --localstatedir=/var/run/frr \
- --sbindir=/usr/lib/frr \
- --sysconfdir=/etc/frr \
- --enable-pimd \
- --enable-watchfrr \
- ...
-```
-
-After configuring the software, you are ready to build and install it in your system.
-
-```shell
-make && sudo make install
-```
-
-If everything finishes successfully, FRR should be installed.
diff --git a/docs/services/nrl.md b/docs/services/nrl.md
deleted file mode 100644
index da26ab25..00000000
--- a/docs/services/nrl.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# NRL Services
-
-## Overview
-
-The Protean Protocol Prototyping Library (ProtoLib) is a cross-platform library that allows applications to be built
-while supporting a variety of platforms including Linux, Windows, WinCE/PocketPC, MacOS, FreeBSD, Solaris, etc., as well
-as the simulation environments of NS2 and Opnet. The goal of the Protolib is to provide a set of simple, cross-platform
-C++ classes that allow development of network protocols and applications that can run on different platforms and in
-network simulation environments. While Protolib provides an overall framework for developing working protocol
-implementations, applications, and simulation modules, the individual classes are designed for use as stand-alone
-components when possible. Although Protolib is principally for research purposes, the code has been constructed to
-provide robust, efficient performance and adaptability to real applications. In some cases, the code consists of data
-structures, etc useful in protocol implementations and, in other cases, provides common, cross-platform interfaces to
-system services and functions (e.g., sockets, timers, routing tables, etc).
-
-Currently, the Naval Research Laboratory uses this library to develop a wide variety of protocols. The NRL Protolib
-currently supports the following protocols:
-
-* MGEN_Sink
-* NHDP
-* SMF
-* OLSR
-* OLSRv2
-* OLSRORG
-* MgenActor
-* arouted
-
-## NRL Installation
-
-In order to be able to use the different protocols that NRL offers, you must first download the support library itself.
-You can get the source code from their [NRL Protolib Repo](https://github.com/USNavalResearchLaboratory/protolib).
-
-## Multi-Generator (MGEN)
-
-Download MGEN from the [NRL MGEN Repo](https://github.com/USNavalResearchLaboratory/mgen), unpack it and copy the
-protolib library into the main folder *mgen*. Execute the following commands to build the protocol.
-
-```shell
-cd mgen/makefiles
-make -f Makefile.{os} mgen
-```
-
-## Neighborhood Discovery Protocol (NHDP)
-
-Download NHDP from the [NRL NHDP Repo](https://github.com/USNavalResearchLaboratory/NCS-Downloads/tree/master/nhdp).
-
-```shell
-sudo apt-get install libpcap-dev libboost-all-dev
-wget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protoc-3.8.0-linux-x86_64.zip
-unzip protoc-3.8.0-linux-x86_64.zip
-```
-
-Then place the binaries in your $PATH. To see your current path, you can issue the following command:
-
-```shell
-echo $PATH
-```
-
-Go to the downloaded *NHDP* tarball, unpack it and place the protolib library inside the NHDP main folder. Now, compile
-the NHDP Protocol.
-
-```shell
-cd nhdp/unix
-make -f Makefile.{os}
-```
-
-## Simplified Multicast Forwarding (SMF)
-
-Download SMF from the [NRL SMF Repo](https://github.com/USNavalResearchLaboratory/nrlsmf), unpack it and place the
-protolib library inside the *smf* main folder.
-
-```shell
-cd smf/makefiles
-make -f Makefile.{os}
-```
-
-## Optimized Link State Routing Protocol (OLSR)
-
-To install the OLSR protocol, download their source code from
-their [NRL OLSR Repo](https://github.com/USNavalResearchLaboratory/nrlolsr). Unpack it and place the previously
-downloaded protolib library inside the *nrlolsr* main directory. Then execute the following commands:
-
-```shell
-cd ./unix
-make -f Makefile.{os}
-```
diff --git a/docs/services/quagga.md b/docs/services/quagga.md
deleted file mode 100644
index 6842b5e7..00000000
--- a/docs/services/quagga.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Quagga Routing Suite
-
-## Overview
-
-Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix
-platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by
-Kunihiro Ishiguro.
-The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix
-kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically
-implement a routing protocol and communicate routing updates to the zebra daemon.
-
-## Quagga Package Install
-
-```shell
-sudo apt-get install quagga
-```
-
-## Quagga Source Install
-
-First, download the source code from their [official webpage](https://www.quagga.net/).
-
-```shell
-sudo apt-get install gawk
-```
-
-Extract the tarball, go to the directory of your currently extracted code and issue the following commands.
-
-```shell
-./configure
-make
-sudo make install
-```
diff --git a/docs/services/sdn.md b/docs/services/sdn.md
deleted file mode 100644
index 05e8606e..00000000
--- a/docs/services/sdn.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Software Defined Networking
-
-## Overview
-
-Ryu is a component-based software defined networking framework. Ryu provides software components with well defined APIs
-that make it easy for developers to create new network management and control applications. Ryu supports various
-protocols for managing network devices, such as OpenFlow, Netconf, and OF-config. For OpenFlow, Ryu fully supports
-versions 1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the code is freely available under the Apache 2.0 license.
-
-## Installation
-
-### Prerequisites
-
-```shell
-sudo apt-get install gcc python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev zlib1g-dev
-```
-
-### Ryu Package Install
-
-```shell
-pip install ryu
-```
-
-### Ryu Source Install
-
-```shell
-git clone git://github.com/osrg/ryu.git
-cd ryu
-pip install .
-```
diff --git a/docs/services/security.md b/docs/services/security.md
deleted file mode 100644
index a621009d..00000000
--- a/docs/services/security.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Security Services
-
-## Overview
-
-The security services offer a wide variety of protocols capable of satisfying most available use cases. These include
-the IP security (IPsec) protocols, which provide security at the IP layer through authentication and encryption of IP
-network packets. Virtual Private Networks (VPNs) and firewalls are also available for use.
-
-## Installation
-
-Libraries needed for some security services.
-
-```shell
-sudo apt-get install ipsec-tools racoon
-```
-
-## OpenVPN
-
-Below is a set of instruction for running a very simple OpenVPN client/server scenario.
-
-### Installation
-
-```shell
-# install openvpn
-sudo apt install openvpn
-
-# retrieve easyrsa3 for key/cert generation
-git clone https://github.com/OpenVPN/easy-rsa
-```
-
-### Generating Keys/Certs
-
-```shell
-# navigate into easyrsa3 repo subdirectory that contains built binary
-cd easy-rsa/easyrsa3
-
-# initialize pki
-./easyrsa init-pki
-
-# build ca
-./easyrsa build-ca
-
-# generate and sign server keypair(s)
-SERVER_NAME=server1
-./easyrsa gen-req $SERVER_NAME nopass
-./easyrsa sign-req server $SERVER_NAME
-
-# generate and sign client keypair(s)
-CLIENT_NAME=client1
-./easyrsa gen-req $CLIENT_NAME nopass
-./easyrsa sign-req client $CLIENT_NAME
-
-# DH generation
-./easyrsa gen-dh
-
-# create directory for keys for CORE to use
-# NOTE: the default is set to a directory that requires using sudo, but can be
-# anywhere and not require sudo at all
-KEYDIR=/etc/core/keys
-sudo mkdir $KEYDIR
-
-# move keys to directory
-sudo cp pki/ca.crt $KEYDIR
-sudo cp pki/issued/*.crt $KEYDIR
-sudo cp pki/private/*.key $KEYDIR
-sudo cp pki/dh.pem $KEYDIR/dh1024.pem
-```
-
-### Configure Server Nodes
-
-Add VPNServer service to nodes desired for running an OpenVPN server.
-
-Modify [sampleVPNServer](https://github.com/coreemu/core/blob/master/package/examples/services/sampleVPNServer) for the
-following
-
-* Edit keydir key/cert directory
-* Edit keyname to use generated server name above
-* Edit vpnserver to match an address that the server node will have
-
-### Configure Client Nodes
-
-Add VPNClient service to nodes desired for acting as an OpenVPN client.
-
-Modify [sampleVPNClient](https://github.com/coreemu/core/blob/master/package/examples/services/sampleVPNClient) for the
-following
-
-* Edit keydir key/cert directory
-* Edit keyname to use generated client name above
-* Edit vpnserver to match the address a server was configured to use
diff --git a/docs/services/utility.md b/docs/services/utility.md
deleted file mode 100644
index 698de4f8..00000000
--- a/docs/services/utility.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Utility Services
-
-## Overview
-
-A variety of convenience services for carrying out common networking tasks.
-
-The following services are provided as utilities:
-
-* UCARP
-* IP Forward
-* Default Routing
-* Default Multicast Routing
-* Static Routing
-* SSH
-* DHCP
-* DHCP Client
-* FTP
-* HTTP
-* PCAP
-* RADVD
-* ATD
-
-## Installation
-
-To install the functionality of the previously mentioned services, you can run the following command:
-
-```shell
-sudo apt-get install isc-dhcp-server apache2 libpcap-dev radvd at
-```
-
-## UCARP
-
-UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a
-portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
-alternative to the patents-bloated VRRP).
-
-Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between
-different operating systems and no need for any dedicated extra network link between redundant hosts.
-
-### Installation
-
-```shell
-sudo apt-get install ucarp
-```
diff --git a/docs/services/xorp.md b/docs/services/xorp.md
deleted file mode 100644
index a9bd108d..00000000
--- a/docs/services/xorp.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# XORP routing suite
-
-## Overview
-
-XORP is an open networking platform that supports OSPF, RIP, BGP, OLSR, VRRP, PIM, IGMP (Multicast) and other routing
-protocols. Most protocols support IPv4 and IPv6 where applicable. It is known to work on various Linux distributions and
-flavors of BSD.
-
-XORP started life as a project at the ICSI Center for Open Networking (ICON) at the International Computer Science
-Institute in Berkeley, California, USA, and spent some time with the team at XORP, Inc. It is now maintained and
-improved on a volunteer basis by a core of long-term XORP developers and some newer contributors.
-
-XORP's primary goal is to be an open platform for networking protocol implementations and an alternative to proprietary
-and closed networking products in the marketplace today. It is the only open source platform to offer integrated
-multicast capability.
-
-XORP's design philosophy is:
-
-* modularity
-* extensibility
-* performance
-* robustness
-
-This is achieved by carefully separating functionalities into independent modules, and by providing an API for each
-module.
-
-XORP divides into two subsystems. The higher-level ("user-level") subsystem consists of the routing protocols. The
-lower-level ("kernel") manages the forwarding path, and provides APIs for the higher-level to access.
-
-User-level XORP uses a multi-process architecture with one process per routing protocol, and a novel inter-process
-communication mechanism called XRL (XORP Resource Locator).
-
-The lower-level subsystem can use traditional UNIX kernel forwarding, or the Click modular router. The modularity and
-independence of the lower-level subsystem from the user-level one allows it to be easily replaced with other solutions,
-including high-end hardware-based forwarding engines.
-
-## Installation
-
-To install the XORP Routing Suite, you must first install scons, which is needed to compile it.
-
-```shell
-sudo apt-get install scons
-```
-
-Then, download XORP from its official [release web page](http://www.xorp.org/releases/current/).
-
-```shell
-# download and extract a release tarball from http://www.xorp.org/releases/current/
-cd xorp
-sudo apt-get install libssl-dev ncurses-dev
-scons
-scons install
-```
diff --git a/docs/static/architecture.png b/docs/static/architecture.png
deleted file mode 100644
index f4ce3388..00000000
Binary files a/docs/static/architecture.png and /dev/null differ
diff --git a/docs/static/core-architecture.jpg b/docs/static/core-architecture.jpg
new file mode 100644
index 00000000..04f6390f
Binary files /dev/null and b/docs/static/core-architecture.jpg differ
diff --git a/docs/static/core-gui.png b/docs/static/core-gui.png
deleted file mode 100644
index 6d0fbd40..00000000
Binary files a/docs/static/core-gui.png and /dev/null differ
diff --git a/docs/static/core-workflow.jpg b/docs/static/core-workflow.jpg
new file mode 100644
index 00000000..b60eff7d
Binary files /dev/null and b/docs/static/core-workflow.jpg differ
diff --git a/docs/static/distributed-controlnetwork.png b/docs/static/distributed-controlnetwork.png
new file mode 100644
index 00000000..ed9b0354
Binary files /dev/null and b/docs/static/distributed-controlnetwork.png differ
diff --git a/docs/static/distributed-emane-configuration.png b/docs/static/distributed-emane-configuration.png
new file mode 100644
index 00000000..219e5d43
Binary files /dev/null and b/docs/static/distributed-emane-configuration.png differ
diff --git a/docs/static/distributed-emane-network.png b/docs/static/distributed-emane-network.png
new file mode 100644
index 00000000..ebc5577f
Binary files /dev/null and b/docs/static/distributed-emane-network.png differ
diff --git a/docs/static/emane-configuration.png b/docs/static/emane-configuration.png
deleted file mode 100644
index ad66a6f3..00000000
Binary files a/docs/static/emane-configuration.png and /dev/null differ
diff --git a/docs/static/emane-single-pc.png b/docs/static/emane-single-pc.png
deleted file mode 100644
index 8c58d825..00000000
Binary files a/docs/static/emane-single-pc.png and /dev/null differ
diff --git a/docs/static/gui/host.png b/docs/static/gui/host.png
deleted file mode 100644
index e6efda08..00000000
Binary files a/docs/static/gui/host.png and /dev/null differ
diff --git a/docs/static/gui/hub.png b/docs/static/gui/hub.png
deleted file mode 100644
index c9a2523b..00000000
Binary files a/docs/static/gui/hub.png and /dev/null differ
diff --git a/docs/static/gui/lanswitch.png b/docs/static/gui/lanswitch.png
deleted file mode 100644
index eb9ba593..00000000
Binary files a/docs/static/gui/lanswitch.png and /dev/null differ
diff --git a/docs/static/gui/link.png b/docs/static/gui/link.png
deleted file mode 100644
index d6b6745b..00000000
Binary files a/docs/static/gui/link.png and /dev/null differ
diff --git a/docs/static/gui/marker.png b/docs/static/gui/marker.png
deleted file mode 100644
index 8c60bacb..00000000
Binary files a/docs/static/gui/marker.png and /dev/null differ
diff --git a/docs/static/gui/mdr.png b/docs/static/gui/mdr.png
deleted file mode 100644
index b0678ee7..00000000
Binary files a/docs/static/gui/mdr.png and /dev/null differ
diff --git a/docs/static/gui/oval.png b/docs/static/gui/oval.png
deleted file mode 100644
index 1babf1b7..00000000
Binary files a/docs/static/gui/oval.png and /dev/null differ
diff --git a/docs/static/gui/pc.png b/docs/static/gui/pc.png
deleted file mode 100644
index 3f587e70..00000000
Binary files a/docs/static/gui/pc.png and /dev/null differ
diff --git a/docs/static/gui/rectangle.png b/docs/static/gui/rectangle.png
deleted file mode 100644
index ca6c8c06..00000000
Binary files a/docs/static/gui/rectangle.png and /dev/null differ
diff --git a/docs/static/gui/rj45.png b/docs/static/gui/rj45.png
deleted file mode 100644
index c9d87cfd..00000000
Binary files a/docs/static/gui/rj45.png and /dev/null differ
diff --git a/docs/static/gui/router.png b/docs/static/gui/router.png
deleted file mode 100644
index 1de5014a..00000000
Binary files a/docs/static/gui/router.png and /dev/null differ
diff --git a/docs/static/gui/run.png b/docs/static/gui/run.png
deleted file mode 100644
index a39a997f..00000000
Binary files a/docs/static/gui/run.png and /dev/null differ
diff --git a/docs/static/gui/select.png b/docs/static/gui/select.png
deleted file mode 100644
index 04e18891..00000000
Binary files a/docs/static/gui/select.png and /dev/null differ
diff --git a/docs/static/gui/start.png b/docs/static/gui/start.png
deleted file mode 100644
index 719f4cd9..00000000
Binary files a/docs/static/gui/start.png and /dev/null differ
diff --git a/docs/static/gui/stop.png b/docs/static/gui/stop.png
deleted file mode 100644
index 1e87c929..00000000
Binary files a/docs/static/gui/stop.png and /dev/null differ
diff --git a/docs/static/gui/text.png b/docs/static/gui/text.png
deleted file mode 100644
index 14a85dc0..00000000
Binary files a/docs/static/gui/text.png and /dev/null differ
diff --git a/docs/static/gui/tunnel.png b/docs/static/gui/tunnel.png
deleted file mode 100644
index 2871b74f..00000000
Binary files a/docs/static/gui/tunnel.png and /dev/null differ
diff --git a/docs/static/gui/wlan.png b/docs/static/gui/wlan.png
deleted file mode 100644
index db979a09..00000000
Binary files a/docs/static/gui/wlan.png and /dev/null differ
diff --git a/docs/static/single-pc-emane.png b/docs/static/single-pc-emane.png
new file mode 100644
index 00000000..579255b8
Binary files /dev/null and b/docs/static/single-pc-emane.png differ
diff --git a/docs/static/tutorial-common/running-join.png b/docs/static/tutorial-common/running-join.png
deleted file mode 100644
index 30fbcb80..00000000
Binary files a/docs/static/tutorial-common/running-join.png and /dev/null differ
diff --git a/docs/static/tutorial-common/running-open.png b/docs/static/tutorial-common/running-open.png
deleted file mode 100644
index 7e3e722c..00000000
Binary files a/docs/static/tutorial-common/running-open.png and /dev/null differ
diff --git a/docs/static/tutorial1/link-config-dialog.png b/docs/static/tutorial1/link-config-dialog.png
deleted file mode 100644
index 73d4ed2d..00000000
Binary files a/docs/static/tutorial1/link-config-dialog.png and /dev/null differ
diff --git a/docs/static/tutorial1/link-config.png b/docs/static/tutorial1/link-config.png
deleted file mode 100644
index 35f45327..00000000
Binary files a/docs/static/tutorial1/link-config.png and /dev/null differ
diff --git a/docs/static/tutorial1/scenario.png b/docs/static/tutorial1/scenario.png
deleted file mode 100644
index c1a2dfc7..00000000
Binary files a/docs/static/tutorial1/scenario.png and /dev/null differ
diff --git a/docs/static/tutorial2/wireless-config-delay.png b/docs/static/tutorial2/wireless-config-delay.png
deleted file mode 100644
index b375af76..00000000
Binary files a/docs/static/tutorial2/wireless-config-delay.png and /dev/null differ
diff --git a/docs/static/tutorial2/wireless-configuration.png b/docs/static/tutorial2/wireless-configuration.png
deleted file mode 100644
index 9b87959c..00000000
Binary files a/docs/static/tutorial2/wireless-configuration.png and /dev/null differ
diff --git a/docs/static/tutorial2/wireless.png b/docs/static/tutorial2/wireless.png
deleted file mode 100644
index 8543117d..00000000
Binary files a/docs/static/tutorial2/wireless.png and /dev/null differ
diff --git a/docs/static/tutorial3/mobility-script.png b/docs/static/tutorial3/mobility-script.png
deleted file mode 100644
index 6f32e5b1..00000000
Binary files a/docs/static/tutorial3/mobility-script.png and /dev/null differ
diff --git a/docs/static/tutorial3/motion_continued_breaks_link.png b/docs/static/tutorial3/motion_continued_breaks_link.png
deleted file mode 100644
index cc1f5dcd..00000000
Binary files a/docs/static/tutorial3/motion_continued_breaks_link.png and /dev/null differ
diff --git a/docs/static/tutorial3/motion_from_ns2_file.png b/docs/static/tutorial3/motion_from_ns2_file.png
deleted file mode 100644
index 704cc1d9..00000000
Binary files a/docs/static/tutorial3/motion_from_ns2_file.png and /dev/null differ
diff --git a/docs/static/tutorial3/move-n2.png b/docs/static/tutorial3/move-n2.png
deleted file mode 100644
index befcd4b0..00000000
Binary files a/docs/static/tutorial3/move-n2.png and /dev/null differ
diff --git a/docs/static/tutorial5/VM-network-settings.png b/docs/static/tutorial5/VM-network-settings.png
deleted file mode 100644
index 5d47738e..00000000
Binary files a/docs/static/tutorial5/VM-network-settings.png and /dev/null differ
diff --git a/docs/static/tutorial5/configure-the-rj45.png b/docs/static/tutorial5/configure-the-rj45.png
deleted file mode 100644
index 0e2b8f8b..00000000
Binary files a/docs/static/tutorial5/configure-the-rj45.png and /dev/null differ
diff --git a/docs/static/tutorial5/rj45-connector.png b/docs/static/tutorial5/rj45-connector.png
deleted file mode 100644
index 8c8e86ef..00000000
Binary files a/docs/static/tutorial5/rj45-connector.png and /dev/null differ
diff --git a/docs/static/tutorial5/rj45-unassigned.png b/docs/static/tutorial5/rj45-unassigned.png
deleted file mode 100644
index eda4a3b6..00000000
Binary files a/docs/static/tutorial5/rj45-unassigned.png and /dev/null differ
diff --git a/docs/static/tutorial6/configure-icon.png b/docs/static/tutorial6/configure-icon.png
deleted file mode 100644
index 52a9e2e8..00000000
Binary files a/docs/static/tutorial6/configure-icon.png and /dev/null differ
diff --git a/docs/static/tutorial6/create-nodes.png b/docs/static/tutorial6/create-nodes.png
deleted file mode 100644
index 38257e24..00000000
Binary files a/docs/static/tutorial6/create-nodes.png and /dev/null differ
diff --git a/docs/static/tutorial6/hidden-nodes.png b/docs/static/tutorial6/hidden-nodes.png
deleted file mode 100644
index 604829dd..00000000
Binary files a/docs/static/tutorial6/hidden-nodes.png and /dev/null differ
diff --git a/docs/static/tutorial6/linked-nodes.png b/docs/static/tutorial6/linked-nodes.png
deleted file mode 100644
index 8e75007e..00000000
Binary files a/docs/static/tutorial6/linked-nodes.png and /dev/null differ
diff --git a/docs/static/tutorial6/only-node1-moving.png b/docs/static/tutorial6/only-node1-moving.png
deleted file mode 100644
index 01ac2ebd..00000000
Binary files a/docs/static/tutorial6/only-node1-moving.png and /dev/null differ
diff --git a/docs/static/tutorial6/scenario-with-motion.png b/docs/static/tutorial6/scenario-with-motion.png
deleted file mode 100644
index e30e781c..00000000
Binary files a/docs/static/tutorial6/scenario-with-motion.png and /dev/null differ
diff --git a/docs/static/tutorial6/scenario-with-terrain.png b/docs/static/tutorial6/scenario-with-terrain.png
deleted file mode 100644
index db424e9b..00000000
Binary files a/docs/static/tutorial6/scenario-with-terrain.png and /dev/null differ
diff --git a/docs/static/tutorial6/select-wallpaper.png b/docs/static/tutorial6/select-wallpaper.png
deleted file mode 100644
index 41d40f57..00000000
Binary files a/docs/static/tutorial6/select-wallpaper.png and /dev/null differ
diff --git a/docs/static/tutorial6/wlan-links.png b/docs/static/tutorial6/wlan-links.png
deleted file mode 100644
index ab6c152d..00000000
Binary files a/docs/static/tutorial6/wlan-links.png and /dev/null differ
diff --git a/docs/static/tutorial7/scenario.png b/docs/static/tutorial7/scenario.png
deleted file mode 100644
index 1c677aa3..00000000
Binary files a/docs/static/tutorial7/scenario.png and /dev/null differ
diff --git a/docs/static/workflow.png b/docs/static/workflow.png
deleted file mode 100644
index 35613983..00000000
Binary files a/docs/static/workflow.png and /dev/null differ
diff --git a/docs/tutorials/common/grpc.md b/docs/tutorials/common/grpc.md
deleted file mode 100644
index 2a85d7c8..00000000
--- a/docs/tutorials/common/grpc.md
+++ /dev/null
@@ -1,22 +0,0 @@
-## gRPC Python Scripts
-
-You can also run the same steps above using the provided gRPC script versions of
-the scenarios. Below are the steps to run and join one of these scenarios, after
-which you can continue with the remaining steps of a given section. A sketch of
-the general shape such a script takes appears at the end of this page.
-
-1. Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-2. From another terminal run the tutorial python script, which will create a session to join
- ``` shell
- /opt/core/venv/bin/python scenario.py
- ```
-3. In another terminal run the CORE GUI
- ``` shell
- core-gui
- ```
-4. You will be presented with sessions to join; select the one created by the script
-
-
-
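-A minimal sketch of the general shape these scenario scripts take, using the
-CORE gRPC client wrappers. The exact names and signatures can vary across CORE
-releases, and the node ids, positions, and subnet below are illustrative only.
-
-``` python
-from core.api.grpc import client
-from core.api.grpc.wrappers import Position
-
-# helper for generating interface addressing from a subnet (illustrative subnet)
-iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24")
-
-# connect to the core-daemon over gRPC and create a session
-core = client.CoreGrpcClient()
-core.connect()
-session = core.create_session()
-
-# create two default (wired) nodes
-node1 = session.add_node(1, position=Position(x=100, y=100))
-node2 = session.add_node(2, position=Position(x=300, y=100))
-
-# link the nodes together with addressed interfaces
-iface1 = iface_helper.create_iface(node1.id, 0)
-iface2 = iface_helper.create_iface(node2.id, 0)
-session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2)
-
-# start the session, which will then be available to join from the GUI
-core.start_session(session)
-```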
diff --git a/docs/tutorials/overview.md b/docs/tutorials/overview.md
deleted file mode 100644
index 6ec0d275..00000000
--- a/docs/tutorials/overview.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# CORE Tutorials
-
-These tutorials will cover various use cases within CORE. They provide
-example python, gRPC, XML, and related files, as well as an explanation of
-their usage and purpose.
-
-## Checklist
-
-These are the items you should become familiar with for running all the tutorials below.
-
-* [Install CORE](../install.md)
-* [Tutorial Setup](setup.md)
-
-## Tutorials
-
-* [Tutorial 1 - Wired Network](tutorial1.md)
- * Covers interactions when using a simple 2 node wired network
-* [Tutorial 2 - Wireless Network](tutorial2.md)
- * Covers interactions when using a simple 3 node wireless network
-* [Tutorial 3 - Basic Mobility](tutorial3.md)
- * Covers mobility interactions when using a simple 3 node wireless network
-* [Tutorial 4 - Tests](tutorial4.md)
- * Covers automating scenarios as tests to validate software
-* [Tutorial 5 - RJ45 Node](tutorial5.md)
- * Covers using the RJ45 node to connect a Windows OS
-* [Tutorial 6 - Improve Visuals](tutorial6.md)
- * Covers changing the look of a scenario within the CORE GUI
-* [Tutorial 7 - EMANE](tutorial7.md)
- * Covers using EMANE within CORE for higher fidelity RF networks
diff --git a/docs/tutorials/setup.md b/docs/tutorials/setup.md
deleted file mode 100644
index 858b0f1d..00000000
--- a/docs/tutorials/setup.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# Tutorial Setup
-
-## Setup for CORE
-
-We assume the prior installation of CORE, using a virtual environment. You can
-then adjust your PATH and add an alias to help more conveniently run CORE
-commands.
-
-This can be set up in your **.bashrc**
-
-```shell
-export PATH=$PATH:/opt/core/venv/bin
-alias sudop='sudo env PATH=$PATH'
-```
-
-## Setup for Chat App
-
-There is a simple TCP chat app provided as example software to use and run
-within the tutorials.
-
-### Installation
-
-The following will install chatapp and its scripts into **/usr/local**, which you
-may need to add to the PATH within a node to be able to use the commands directly.
-
-``` shell
-sudo python3 -m pip install .
-```
-
-!!! note
-
-    Some Linux distros will not have **/usr/local/bin** in their PATH and you
-    will need to compensate.
-
-``` shell
-export PATH=$PATH:/usr/local/bin
-```
-
-### Running the Server
-
-The server will print and log connected clients and their messages.
-
-``` shell
-usage: chatapp-server [-h] [-a ADDRESS] [-p PORT]
-
-chat app server
-
-optional arguments:
- -h, --help show this help message and exit
- -a ADDRESS, --address ADDRESS
- address to listen on (default: )
- -p PORT, --port PORT port to listen on (default: 9001)
-```
-
-### Running the Client
-
-The client will print and log messages from other clients and their join/leave status.
-
-``` shell
-usage: chatapp-client [-h] -a ADDRESS [-p PORT]
-
-chat app client
-
-optional arguments:
- -h, --help show this help message and exit
- -a ADDRESS, --address ADDRESS
- address to listen on (default: None)
- -p PORT, --port PORT port to listen on (default: 9001)
-```
-
-### Installing the Chat App Service
-
-1. You will first need to edit **/etc/core/core.conf** to update the config
-   service path to pick up your service
-    ``` shell
-    custom_config_services_dir =
-    ```
-2. Copy/move **chatapp/chatapp_service.py** to the directory configured above;
-   a rough sketch of the shape such a service takes is shown below
-3. Restart the **core-daemon** to pick up the new service
-4. The service will now be an available option under the group **ChatApp** with
-   the name **ChatApp Server**
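-
-For orientation, below is a rough, hypothetical sketch of the shape a custom
-config service takes; the real **chatapp_service.py** ships with the tutorial
-files and will differ, and the attribute details and required template hooks
-vary by CORE version.
-
-``` python
-from typing import Dict, List
-
-from core.config import Configuration
-from core.configservice.base import ConfigService, ConfigServiceMode
-
-
-# hypothetical sketch only, not the actual tutorial service
-class ChatAppServerService(ConfigService):
-    name: str = "ChatApp Server"
-    group: str = "ChatApp"
-    directories: List[str] = []
-    files: List[str] = ["chatapp-start.sh"]
-    executables: List[str] = []
-    dependencies: List[str] = []
-    startup: List[str] = [f"bash {files[0]}"]
-    validate: List[str] = []
-    shutdown: List[str] = []
-    validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
-    default_configs: List[Configuration] = []
-    modes: Dict[str, Dict[str, str]] = {}
-```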
diff --git a/docs/tutorials/tutorial1.md b/docs/tutorials/tutorial1.md
deleted file mode 100644
index 7bda7e7f..00000000
--- a/docs/tutorials/tutorial1.md
+++ /dev/null
@@ -1,252 +0,0 @@
-# Tutorial 1 - Wired Network
-
-## Overview
-
-This tutorial will cover some use cases when using a wired 2 node
-scenario in CORE.
-
-
-
-
-
-## Files
-
-Below is the list of files used for this tutorial.
-
-* 2 node wired scenario
- * scenario.xml
- * scenario.py
-* 2 node wired scenario, with **n1** running the "Chat App Server" service
- * scenario_service.xml
- * scenario_service.py
-
-## Running this Tutorial
-
-This section covers interactions that can be carried out for this scenario.
-
-Our scenario has the following nodes and addresses:
-
-* n1 - 10.0.0.20
-* n2 - 10.0.0.21
-
-All usages below assume a clean scenario start.
-
-### Using Ping
-
-Using the command line utility **ping** can be a good way to verify connectivity
-between nodes in CORE.
-
-* Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ``` shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**, then navigate to and select **scenario.xml**
-
-
-
-* You can now click on the **Start Session** button to run the scenario
-
-
-
-* Open a terminal on **n1** by double clicking it in the GUI
-* Run the following in **n1** terminal
- ``` shell
- ping -c 3 10.0.0.21
- ```
-* You should see the following output
- ``` shell
- PING 10.0.0.21 (10.0.0.21) 56(84) bytes of data.
- 64 bytes from 10.0.0.21: icmp_seq=1 ttl=64 time=0.085 ms
- 64 bytes from 10.0.0.21: icmp_seq=2 ttl=64 time=0.079 ms
- 64 bytes from 10.0.0.21: icmp_seq=3 ttl=64 time=0.072 ms
-
- --- 10.0.0.21 ping statistics ---
- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms
- rtt min/avg/max/mdev = 0.072/0.078/0.085/0.011 ms
- ```
-
-### Using Tcpdump
-
-Using **tcpdump** can be very beneficial for examining a network. You can verify
-traffic being sent/received, among many other uses.
-
-* Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ``` shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**, then navigate to and select **scenario.xml**
-
-
-
-* You can now click on the **Start Session** button to run the scenario
-
-
-
-* Open a terminal on **n1** by double clicking it in the GUI
-* Open a terminal on **n2** by double clicking it in the GUI
-* Run the following in **n2** terminal
- ``` shell
- tcpdump -lenni eth0
- ```
-* Run the following in **n1** terminal
- ``` shell
- ping -c 1 10.0.0.21
- ```
-* You should see the following in **n2** terminal
- ``` shell
- tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
- listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
- 10:23:04.685292 00:00:00:aa:00:00 > 00:00:00:aa:00:01, ethertype IPv4 (0x0800), length 98: 10.0.0.20 > 10.0.0.21: ICMP echo request, id 67, seq 1, length 64
- 10:23:04.685329 00:00:00:aa:00:01 > 00:00:00:aa:00:00, ethertype IPv4 (0x0800), length 98: 10.0.0.21 > 10.0.0.20: ICMP echo reply, id 67, seq 1, length 64
- ```
-
-### Editing a Link
-
-You can edit links between nodes in CORE to modify loss, delay, bandwidth, and more. This can be
-beneficial for understanding how software will behave in adverse conditions.
-
-* Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ``` shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**, then navigate to and select **scenario.xml**
-
-
-
-* You can now click on the **Start Session** button to run the scenario
-
-
-
-* Right click the link between **n1** and **n2**
-* Select **Configure**
-
-
-
-* Update the loss to **25**
-
-
-
-* Open a terminal on **n1** by double clicking it in the GUI
-* Run the following in **n1** terminal
- ``` shell
- ping -c 10 10.0.0.21
- ```
-* You should see something similar for the summary output, reflecting the change in loss
- ``` shell
- --- 10.0.0.21 ping statistics ---
- 10 packets transmitted, 6 received, 40% packet loss, time 9000ms
- rtt min/avg/max/mdev = 0.077/0.093/0.108/0.016 ms
- ```
-* Remember that the observed loss is compounded, since the configured loss is
-  applied independently in each direction of a ping; with 25% loss per direction,
-  a round trip succeeds with probability 0.75 * 0.75 = 0.5625, so roughly 44%
-  loss is expected
-
-### Running Software
-
-We will now leverage the installed Chat App software to stand up a server and client
-within the nodes of our scenario.
-
-* Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ``` shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**, then navigate to and select **scenario.xml**
-
-
-
-* You can now click on the **Start Session** button to run the scenario
-
-
-
-* Open a terminal on **n1** by double clicking it in the GUI
-* Run the following in **n1** terminal
- ``` shell
- export PATH=$PATH:/usr/local/bin
- chatapp-server
- ```
-* Open a terminal on **n2** by double clicking it in the GUI
-* Run the following in **n2** terminal
- ``` shell
- export PATH=$PATH:/usr/local/bin
- chatapp-client -a 10.0.0.20
- ```
-* You will see the following output in **n1** terminal
- ``` shell
- chat server listening on: :9001
- [server] 10.0.0.21:44362 joining
- ```
-* Type the following in **n2** terminal and hit enter
- ``` shell
- hello world
- ```
-* You will see the following output in **n1** terminal
- ``` shell
- chat server listening on: :9001
- [server] 10.0.0.21:44362 joining
- [10.0.0.21:44362] hello world
- ```
-
-### Tailing a Log
-
-In this case we are using the service-based scenario. This will automatically start
-and run the Chat App Server on **n1** and log to a file. This case will demonstrate
-using `tail -f` to observe the output of running software.
-
-* Make sure the CORE daemon is running in a terminal, if not already
- ``` shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ``` shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**, then navigate to and select **scenario_service.xml**
-
-
-
-* You can now click on the **Start Session** button to run the scenario
-
-
-
-* Open a terminal on **n1** by double clicking it in the GUI
-* Run the following in **n1** terminal
- ``` shell
- tail -f chatapp.log
- ```
-* Open a terminal on **n2** by double clicking it in the GUI
-* Run the following in **n2** terminal
- ``` shell
- export PATH=$PATH:/usr/local/bin
- chatapp-client -a 10.0.0.20
- ```
-* You will see the following output in **n1** terminal
- ``` shell
- chat server listening on: :9001
- [server] 10.0.0.21:44362 joining
- ```
-* Type the following in **n2** terminal and hit enter
- ``` shell
- hello world
- ```
-* You will see the following output in **n1** terminal
- ``` shell
- chat server listening on: :9001
- [server] 10.0.0.21:44362 joining
- [10.0.0.21:44362] hello world
- ```
-
---8<-- "tutorials/common/grpc.md"
diff --git a/docs/tutorials/tutorial2.md b/docs/tutorials/tutorial2.md
deleted file mode 100644
index 7b82e04e..00000000
--- a/docs/tutorials/tutorial2.md
+++ /dev/null
@@ -1,145 +0,0 @@
-# Tutorial 2 - Wireless Network
-
-## Overview
-
-This tutorial will cover the use of a 3 node wireless scenario in CORE, then
-running a chat server on one node and a chat client on another. The client will
-send a simple message and the server will log receipt of the message.
-
-## Files
-
-Below is the list of files used for this tutorial.
-
-* scenario.xml - 3 node CORE xml scenario file (wireless)
-* scenario.py - 3 node CORE gRPC python script (wireless)
-
-## Running with the XML Scenario File
-
-This section will cover running this sample tutorial using the
-XML scenario file.
-
-* Make sure the **core-daemon** is running in a terminal
- ```shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ```shell
- core-gui
- ```
-* In the GUI menu bar select **File->Open...**
-* Navigate to and select this tutorial's **scenario.xml** file
-* You can now click play to start the session
-
-
-
-* Note that the OSPF routing protocol is included in the scenario to provide routes to other nodes as they are discovered
-* Double click node **n4** to open a terminal and ping node **n2**
- ```shell
- ping -c 2 10.0.0.2
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- 64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.2 ms
- 64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.2 ms
-
- --- 10.0.0.2 ping statistics ---
- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms
- rtt min/avg/max/mdev = 20.168/20.173/20.178/0.005 ms
- ```
-
-### Configuring Delay
-
-* Right click on the **wlan1** node and select **WLAN Config**, then set delay to 500000 (microseconds)
-
-
-
-* Using the open terminal for node **n4**, ping **n2** again and expect about 2 seconds of
-  delay; the ~2000 ms round trip reflects the configured delay being applied to each link
-  traversal in both directions
- ```shell
- ping -c 5 10.0.0.2
- 64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=2001 ms
- 64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=2000 ms
- 64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=2000 ms
- 64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=2000 ms
- 64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=2000 ms
-
- --- 10.0.0.2 ping statistics ---
- 5 packets transmitted, 5 received, 0% packet loss, time 4024ms
- rtt min/avg/max/mdev = 2000.176/2000.438/2001.166/0.376 ms, pipe 2
- ```
-
-### Configure Loss
-
-* Right click on the **wlan1** node and select **WLAN Config**, set delay back to 5000 and loss to 10
-
-
-
-* Using the open terminal for node **n4**, ping **n2** again, expect to notice considerable loss
- ```shell
- ping -c 10 10.0.0.2
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- 64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=20.4 ms
- 64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=20.5 ms
- 64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=20.2 ms
- 64 bytes from 10.0.0.2: icmp_seq=4 ttl=63 time=20.8 ms
- 64 bytes from 10.0.0.2: icmp_seq=5 ttl=63 time=21.9 ms
- 64 bytes from 10.0.0.2: icmp_seq=8 ttl=63 time=22.7 ms
- 64 bytes from 10.0.0.2: icmp_seq=9 ttl=63 time=22.4 ms
- 64 bytes from 10.0.0.2: icmp_seq=10 ttl=63 time=20.3 ms
-
- --- 10.0.0.2 ping statistics ---
- 10 packets transmitted, 8 received, 20% packet loss, time 9064ms
- rtt min/avg/max/mdev = 20.188/21.143/22.717/0.967 ms
- ```
-* Make sure to set loss back to 0 when done
-
-## Running with the gRPC Python Script
-
-This section will cover running this sample tutorial using the gRPC python
-script version of the scenario; a rough sketch of how such a script builds
-the wireless pieces is shown at the end of this section.
-
-* Make sure the **core-daemon** is running in a terminal
- ```shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ```shell
- core-gui
- ```
-* From another terminal run the **scenario.py** script
- ```shell
- /opt/core/venv/bin/python scenario.py
- ```
-* In the GUI dialog box select the session and click connect
-* You will now have joined the already running scenario
-
-
-
-
-
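-For reference, a rough sketch of how such a script might build the wireless
-pieces over gRPC. Names follow the CORE gRPC wrappers but may vary across
-releases, and the node id, position, and config values are illustrative only.
-
-``` python
-from core.api.grpc import client
-from core.api.grpc.wrappers import NodeType, Position
-
-core = client.CoreGrpcClient()
-core.connect()
-session = core.create_session()
-
-# wireless LAN node that the wireless nodes attach to
-wlan = session.add_node(1, _type=NodeType.WIRELESS_LAN, position=Position(x=200, y=100))
-# illustrative wireless settings, matching the WLAN Config dialog in the GUI
-wlan.set_wlan({
-    "range": "280",
-    "bandwidth": "55000000",
-    "delay": "5000",
-    "jitter": "0",
-    "error": "0",
-})
-```
-
-Wireless nodes would then be added and linked to the WLAN node in the same way
-wired nodes are linked to each other.
-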
-## Running Software
-
-We will now leverage the installed Chat App software to stand up a server and client
-within the nodes of our scenario. You can use either **scenario.xml** or the
-**scenario.py** gRPC script as the base for the running scenario.
-
-* In the GUI double click on node **n4**, this will bring up a terminal for this node
-* In the **n4** terminal, run the server
- ```shell
- export PATH=$PATH:/usr/local/bin
- chatapp-server
- ```
-* In the GUI double click on node **n2**, this will bring up a terminal for this node
-* In the **n2** terminal, run the client
- ```shell
- export PATH=$PATH:/usr/local/bin
- chatapp-client -a 10.0.0.4
- ```
-* This will result in **n2** connecting to the server
-* In the **n2** terminal, type a message at the client prompt
- ```shell
- >>hello world
- ```
-* Observe that the text typed at the client then appears in the terminal of **n4**
- ```shell
- chat server listening on: :9001
- [server] 10.0.0.2:53684 joining
- [10.0.0.2:53684] hello world
- ```
diff --git a/docs/tutorials/tutorial3.md b/docs/tutorials/tutorial3.md
deleted file mode 100644
index eaa2a5e6..00000000
--- a/docs/tutorials/tutorial3.md
+++ /dev/null
@@ -1,155 +0,0 @@
-# Tutorial 3 - Basic Mobility
-
-## Overview
-
-This tutorial will cover using a 3 node scenario in CORE with basic mobility.
-Mobility can be provided from an NS2 file or by including mobility commands in a gRPC script.
-
-## Files
-
-Below is the list of files used for this tutorial.
-
-* movements1.txt - an NS2 mobility input file
-* scenario.xml - 3 node CORE xml scenario file (wireless)
-* scenario.py - 3 node CORE gRPC python script (wireless)
-* printout.py - event listener
-
-## Running with XML file using NS2 Movement
-
-This section will cover running this sample tutorial using the XML scenario
-file, leveraging an NS2 file for mobility.
-
-* Make sure the **core-daemon** is running in a terminal
- ```shell
- sudop core-daemon
- ```
-* In another terminal run the GUI
- ```shell
- core-gui
- ```
-* Observe the format of the NS2 file with `cat movements1.txt`, noting that this file was
-  manually developed; each `$ns_ at <time>` line schedules a **setdest** move to a new
-  x/y position for the given node at that time
- ```shell
- $node_(1) set X_ 208.1
- $node_(1) set Y_ 211.05
- $node_(1) set Z_ 0
- $ns_ at 0.0 "$node_(1) setdest 208.1 211.05 0.00"
- $node_(2) set X_ 393.1
- $node_(2) set Y_ 223.05
- $node_(2) set Z_ 0
- $ns_ at 0.0 "$node_(2) setdest 393.1 223.05 0.00"
- $node_(4) set X_ 499.1
- $node_(4) set Y_ 186.05
- $node_(4) set Z_ 0
- $ns_ at 0.0 "$node_(4) setdest 499.1 186.05 0.00"
- $ns_ at 1.0 "$node_(1) setdest 190.1 225.05 0.00"
- $ns_ at 1.0 "$node_(2) setdest 393.1 225.05 0.00"
- $ns_ at 1.0 "$node_(4) setdest 515.1 186.05 0.00"
- $ns_ at 2.0 "$node_(1) setdest 175.1 250.05 0.00"
- $ns_ at 2.0 "$node_(2) setdest 393.1 250.05 0.00"
- $ns_ at 2.0 "$node_(4) setdest 530.1 186.05 0.00"
- $ns_ at 3.0 "$node_(1) setdest 160.1 275.05 0.00"
- $ns_ at 3.0 "$node_(2) setdest 393.1 275.05 0.00"
- $ns_ at 3.0 "$node_(4) setdest 530.1 186.05 0.00"
- $ns_ at 4.0 "$node_(1) setdest 160.1 300.05 0.00"
- $ns_ at 4.0 "$node_(2) setdest 393.1 300.05 0.00"
- $ns_ at 4.0 "$node_(4) setdest 550.1 186.05 0.00"
- $ns_ at 5.0 "$node_(1) setdest 160.1 275.05 0.00"
- $ns_ at 5.0 "$node_(2) setdest 393.1 275.05 0.00"
- $ns_ at 5.0 "$node_(4) setdest 530.1 186.05 0.00"
- $ns_ at 6.0 "$node_(1) setdest 175.1 250.05 0.00"
- $ns_ at 6.0 "$node_(2) setdest 393.1 250.05 0.00"
- $ns_ at 6.0 "$node_(4) setdest 515.1 186.05 0.00"
- $ns_ at 7.0 "$node_(1) setdest 190.1 225.05 0.00"
- $ns_ at 7.0 "$node_(2) setdest 393.1 225.05 0.00"
- $ns_ at 7.0 "$node_(4) setdest 499.1 186.05 0.00"
- ```
-* In the GUI menu bar select **File->Open...**, and select this tutorial's **scenario.xml** file
-* You can now click play to start the session
-* Select the play button on the Mobility Player to start mobility
-* Observe movement of the nodes
-* Note that the OSPF routing protocol is included in the scenario to build the routing table, so
-  that once routes to other nodes are discovered, ping will work
-
-
-
-
-
-## Running with the gRPC Script
-
-This section covers using a gRPC script to create and provide scenario movement.
-
-* Make sure the **core-daemon** is running in a terminal
- ```shell
- sudop core-daemon
- ```
-* From another terminal run the **scenario.py** script
- ```shell
- /opt/core/venv/bin/python scenario.py
- ```
-* In another terminal run the GUI
- ```shell
- core-gui
- ```
-* In the GUI dialog box select the session and click connect
-* You will now have joined the already running scenario
-* In the terminal running the **scenario.py**, hit a key to start motion
-
-
-
-* Observe that the link between **n3** and **n4** is shown, and then breaks as motion continues
-
-
-
-
-## Running the Chat App Software
-
-This section covers using either of the two scenarios above to run software
-within the nodes.
-
-* In the GUI double click on **n4**, this will bring up a terminal for this node
-* In the **n4** terminal, run the server
- ```shell
- export PATH=$PATH:/usr/local/bin
- chatapp-server
- ```
-* In the GUI double click on **n2**, this will bring up a terminal for this node
-* In the **n2** terminal, run the client
- ```shell
- export PATH=$PATH:/usr/local/bin
- chatapp-client -a 10.0.0.4
- ```
-* This will result in **n2** connecting to the server
-* In the **n2** terminal, type a message at the client prompt and hit enter
- ```shell
- >>hello world
- ```
-* Observe that the text typed at the client then appears in the server terminal
- ```shell
- chat server listening on: :9001
- [server] 10.0.0.2:53684 joining
- [10.0.0.2:53684] hello world
- ```
-
-## Running Mobility from a Node
-
-This section provides an example of running a script within a node that
-leverages a CORE control network to issue mobility commands over the gRPC
-API.
-
-* Edit the following line in **/etc/core/core.conf**
- ```shell
- grpcaddress = 0.0.0.0
- ```
-* Start the scenario from the **scenario.xml**
-* From the GUI open **Session -> Options** and set **Control Network** to **172.16.0.0/24**
-* Click to play the scenario
-* Double click on **n2** to get a terminal window
-* From the terminal window for **n2**, run the script
- ```shell
- /opt/core/venv/bin/python move-node2.py
- ```
-* Observe that node **n2** moves and continues to move; a rough sketch of what such a script might contain is shown below
-
-
-
-
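-A hypothetical sketch of what such a move script might contain; the actual
-**move-node2.py** ships with the tutorial files and may differ. The control
-network address, node id, and coordinates below are assumptions, and gRPC
-client method names can vary across CORE releases.
-
-``` python
-import time
-
-from core.api.grpc import client
-from core.api.grpc.wrappers import Position
-
-# the daemon is reached over the control network from inside the node;
-# the host side of a 172.16.0.0/24 control network is assumed here
-core = client.CoreGrpcClient(address="172.16.0.254:50051")
-core.connect()
-
-# join the first running session and repeatedly nudge node n2 to the right
-session = core.get_sessions()[0]
-x, y = 100.0, 200.0
-for _ in range(30):
-    core.move_node(session.id, 2, position=Position(x=x, y=y))
-    x += 10.0
-    time.sleep(1)
-```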
diff --git a/docs/tutorials/tutorial4.md b/docs/tutorials/tutorial4.md
deleted file mode 100644
index 77ac1c94..00000000
--- a/docs/tutorials/tutorial4.md
+++ /dev/null
@@ -1,121 +0,0 @@
-# Tutorial 4 - Tests
-
-## Overview
-
-A use case for CORE is helping automate integration tests for software running
-within a network. This tutorial covers using CORE with the python pytest testing
-framework. It shows how you can define tests for different use cases to validate
-software and outcomes within a defined network. Using pytest, you create tests
-with all the standard pytest functionality: create a test file, then define the
-test functions to run. For these tests, we leverage the CORE library directly
-and the API it provides.
-
-Refer to the [pytest documentation](https://docs.pytest.org) for in-depth
-information on how to write tests with pytest.
-
-## Files
-
-A directory is used to contain your tests. Within this directory we need a
-**conftest.py**, which pytest picks up to define and provide test fixtures
-that are leveraged within our tests.
-
-* tests
- * conftest.py - file used by pytest to define fixtures, which can be shared across tests
- * test_ping.py - defines test classes/functions to run
-
-## Test Fixtures
-
-Below are fixture definitions you can use to facilitate and simplify the
-creation of CORE-based tests.
-
-The global session fixture creates one **CoreEmu** object for the entire
-test session, yields it for testing, and calls shutdown when everything
-is over.
-
-``` python
-import pytest
-
-# import paths here may vary across CORE releases
-from core.emulator.coreemu import CoreEmu
-from core.emulator.data import IpPrefixes
-from core.emulator.enumerations import EventTypes
-
-
-@pytest.fixture(scope="session")
-def global_session():
-    core = CoreEmu()
-    session = core.create_session()
-    session.set_state(EventTypes.CONFIGURATION_STATE)
-    yield session
-    core.shutdown()
-```
-
-The regular session fixture leverages the global session fixture. It
-will set the correct state for each test case, yield the session for a test,
-and then clear the session after a test finishes to prepare for the next
-test.
-
-``` python
-@pytest.fixture
-def session(global_session):
- global_session.set_state(EventTypes.CONFIGURATION_STATE)
- yield global_session
- global_session.clear()
-```
-
-The ip prefixes fixture provides a preconfigured convenience for creating and
-assigning interfaces to nodes when building your network within a test. The
-address subnet can be whatever you desire.
-
-``` python
-@pytest.fixture(scope="session")
-def ip_prefixes():
- return IpPrefixes(ip4_prefix="10.0.0.0/24")
-```
-
-## Test Functions
-
-Within a pytest test file, you have the freedom to create any kind of
-test you like, but they will all follow a similar formula.
-
-* define a test function that will leverage the session and ip prefixes fixtures
-* then create a network to test, using the session fixture
-* run commands within nodes as desired, to test out your use case
-* validate command result or output for expected behavior to pass or fail
-
-In the test below, we create a simple 2 node wired network and validate
-node1 can ping node2 successfully.
-
-``` python
-def test_success(self, session: Session, ip_prefixes: IpPrefixes):
- # create nodes
- node1 = session.add_node(CoreNode)
- node2 = session.add_node(CoreNode)
-
- # link nodes together
- iface1_data = ip_prefixes.create_iface(node1)
- iface2_data = ip_prefixes.create_iface(node2)
- session.add_link(node1.id, node2.id, iface1_data, iface2_data)
-
-    # ping node, expecting success; cmd raises an error on a non-zero exit
-    node1.cmd(f"ping -c 1 {iface2_data.ip4}")
-```
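-
-The provided tests also include a failure case. Below is a sketch of what such
-a test might look like; both tests live on a **TestPing** class (hence **self**),
-the imports shown would sit at the top of **test_ping.py**, and import paths may
-vary by CORE version. Since no link is created here, the ping cannot succeed,
-and **cmd** raises an error on the non-zero exit.
-
-``` python
-import pytest
-
-from core.emulator.data import IpPrefixes
-from core.emulator.session import Session
-from core.errors import CoreCommandError
-from core.nodes.base import CoreNode
-
-
-def test_failure(self, session: Session, ip_prefixes: IpPrefixes):
-    # create nodes, but do not link them together
-    node1 = session.add_node(CoreNode)
-    node2 = session.add_node(CoreNode)
-    iface2_data = ip_prefixes.create_iface(node2)
-
-    # ping fails since there is no connectivity, and cmd raises on failure
-    with pytest.raises(CoreCommandError):
-        node1.cmd(f"ping -c 1 {iface2_data.ip4}")
-```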
-
-## Install Pytest
-
-Since we are running an automated test within CORE, we will need to install
-pytest within the python interpreter used by CORE.
-
-``` shell
-sudo /opt/core/venv/bin/python -m pip install pytest
-```
-
-## Running Tests
-
-You can run your own or the provided tests by changing into the directory containing them and running the following.
-
-``` shell
-cd
-sudo /opt/core/venv/bin/python -m pytest -v
-```
-
-If you run the provided tests, you would expect to see the two tests
-running and passing.
-
-``` shell
-tests/test_ping.py::TestPing::test_success PASSED [ 50%]
-tests/test_ping.py::TestPing::test_failure PASSED [100%]
-```
-
diff --git a/docs/tutorials/tutorial5.md b/docs/tutorials/tutorial5.md
deleted file mode 100644
index 92337717..00000000
--- a/docs/tutorials/tutorial5.md
+++ /dev/null
@@ -1,168 +0,0 @@
-# Tutorial 5 - RJ45 Node
-
-## Overview
-
-This tutorial will cover connecting a CORE VM to a Windows host machine using an RJ45 node.
-
-## Files
-
-Below is the list of files used for this tutorial.
-
-* scenario.xml - the scenario with the RJ45 unassigned
-* scenario.py - gRPC script to create the RJ45 in a simple CORE scenario
-* client_for_windows.py - chat app client modified for Windows
-
-## Running with the Saved XML File
-
-This section covers using the saved **scenario.xml** file to get up and running.
-
-* Configure the Windows host VM to have a bridged network adapter
-
-