docs - added an updated take on running distributed and isolated it to its own higher level page

This commit is contained in:
Blake J. Harnden 2019-06-19 10:31:34 -07:00
parent 5b1c9a6e68
commit 3dac7f096c
3 changed files with 173 additions and 106 deletions

docs/distributed.md (new file)

@@ -0,0 +1,172 @@
# CORE - Distributed Emulation
* Table of Contents
{:toc}
## Overview
A large emulation scenario can be deployed on multiple emulation servers and
controlled by a single GUI. The GUI, representing the entire topology, can be
run on one of the emulation servers or on a separate machine.
Each machine that will act as an emulation server should ideally have the
same version of CORE installed. The GUI component is not required on the
servers, but the CORE Python daemon **core-daemon** must be installed.
**NOTE: The server that the GUI connects with is referred to as
the master server.**
## Configuring Listen Address
First, we need to configure the **core-daemon** on all servers to listen on an
address reachable over the network. The simplest approach is to update the core
configuration file to listen on all interfaces. Alternatively, configure it to
listen on a specific interface by supplying that interface's address.
The **listenaddr** configuration should be set to the address of the interface
that should receive CORE API control commands from the other servers;
setting **listenaddr = 0.0.0.0** causes the Python daemon to listen on all
interfaces. CORE uses TCP port **4038** by default to communicate from the
controlling machine (with GUI) to the emulation servers. Make sure that
firewall rules are configured as necessary to allow this traffic.
```shell
# open configuration file
vi /etc/core/core.conf
# within core.conf
[core-daemon]
listenaddr = 0.0.0.0
```
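After editing **core.conf**, restart the daemon so the new listen address takes
effect. The command below assumes a systemd-based install; adjust it to match
how **core-daemon** is managed on your servers.
```shell
# restart the daemon to pick up the new listenaddr
sudo systemctl restart core-daemon
```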
## Enabling Remote SSH Shells
### Update GUI Terminal Program
**Edit -> Preferences... -> Terminal program:**
It is currently recommended to set this to **xterm -e**, as the default
**gnome-terminal** will not work.
You may need to install xterm, if it is not already installed.
```shell
sudo apt install xterm
```
### Setup SSH
In order to easily open shells on the emulation servers, the servers should be
running an SSH server, and public key login should be enabled. This is
accomplished by generating an SSH key for your user on all servers being used
for distributed emulation, if you do not already have one, and then copying the
master server's public key to the **authorized_keys** file on all other servers
that will help drive the distributed emulation. When double-clicking on a
node during runtime, instead of opening a local shell, the GUI will attempt to
SSH to the emulation server to run an interactive shell.
You need to have the same user defined on each server, since the user used
for these remote shells is the same user that is running the CORE GUI.
```shell
# install openssh-server
sudo apt install openssh-server
# generate ssh if needed
ssh-keygen -o -t rsa -b 4096
# copy public key to authorized_keys file
ssh-copy-id user@server
# or
scp ~/.ssh/id_rsa.pub username@server:~/.ssh/authorized_keys
```
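Before starting a distributed session, you can confirm that key-based login
works with a non-interactive test from the master server. The **user@server**
value below is a placeholder for your own account and server address.
```shell
# should return immediately without prompting for a password
ssh -o BatchMode=yes user@server true
```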
## Add Emulation Servers in GUI
Within the core-gui navigate to menu option:
**Session -> Emulation servers...**
Within the dialog box presented, add a new server, or modify an existing one,
to use the name, address, and port of the server you plan to use.
Server configurations are loaded from and written to a configuration file for
the GUI.
**~/.core/servers.conf**
```conf
# name address port
server2 192.168.0.2 4038
```
## Assigning Nodes
The user needs to assign nodes to emulation servers in the scenario. Making no
assignment means the node will be emulated on the master server.
In the configuration window of every node, a drop-down box located between
the *Node name* and the *Image* button will select the name of the emulation
server. By default, this menu shows *(none)*, indicating that the node will
be emulated locally on the master. When entering Execute mode, the CORE GUI
will deploy the node on its assigned emulation server.
Another way to assign emulation servers is to select one or more nodes using
the select tool (shift-click to select multiple), and right-click one of the
nodes and choose *Assign to...*.
The **CORE emulation servers** dialog box may also be used to assign nodes to
servers. The assigned server name appears in parentheses next to the node name.
To assign all nodes to one of the servers, click on the server name and then
the **all nodes** button. Servers that have assigned nodes are shown in blue in
the server list. Another option is to first select a subset of nodes, then open
the **CORE emulation servers** box and use the **selected nodes** button.
**IMPORTANT: Leave the nodes unassigned if they are to be run on the master
server. Do not explicitly assign the nodes to the master server.**
## GUI Visualization
If there is a link between two nodes residing on different servers, the GUI
will draw the link with a dashed line.
## Concerns and Limitations
Wireless nodes, i.e. those connected to a WLAN node, can be assigned to
different emulation servers and participate in the same wireless network
only if an EMANE model is used for the WLAN. The basic range model does
not work across multiple servers due to the Linux bridging and ebtables
rules that are used.
**NOTE: The basic range wireless model does not support distributed emulation,
but EMANE does.**
When nodes are linked across servers, the **core-daemon** instances will
automatically create the necessary tunnels between the nodes when the session
is executed. Care should be taken to arrange the topology such that the number
of tunnels is minimized. The tunnels carry data between servers to connect
nodes as specified in the topology.
These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
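For reference, the sketch below shows how such a point-to-point GRE tap tunnel
can be created by hand with **iproute2**. It only illustrates the mechanism; it
is not the exact set of commands CORE runs, and the interface name and
addresses are made up.
```shell
# on server A (192.168.0.1): create a GRE tap tunnel toward server B
sudo ip link add gretap-srv2 type gretap remote 192.168.0.2 local 192.168.0.1
sudo ip link set gretap-srv2 up
# the mirror-image commands would be run on server B (192.168.0.2)
```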
### EMANE Issues
EMANE appears to require location events to be synced across all EMANE
instances in order for nodes to find each other. Using an EMANE EEL file
for your scenario can help with this, and might be desired anyway.
* https://github.com/adjacentlink/emane/wiki/EEL-Generator
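As a rough illustration, an EEL file is a plain-text list of timestamped
events; an entry assigning an initial GPS location to a NEM might look like the
following (the NEM id and coordinates are made up, see the link above for the
actual format).
```text
0.0 nem:1 location gps 40.025495,-74.315441,3.0
```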
You can also move nodes within the GUI to help trigger location events from
CORE when the **core.conf** setting below is used, assuming the nodes did not
find each other by default and you are not using an EEL file.
```shell
# within /etc/core/core.conf
emane_event_generate = True
```
## Distributed Checklist
1. Install the same version of the CORE daemon on all servers.
1. Set the **listenaddr** configuration in each server's core.conf file,
then start (or restart) the daemon.
1. Install and configure public-key SSH access on all servers (if you want to use
double-click shells or Widgets).
1. Assign nodes to the desired servers; leave nodes unassigned to run them on the
master server.
1. Press the **Start** button to launch the distributed emulation.
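After working through the checklist, one quick sanity check is to confirm that
each server's daemon is actually listening on the CORE API port (4038 by
default), for example:
```shell
# confirm core-daemon is listening on the API port
sudo ss -tlnp | grep 4038
```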


@@ -23,6 +23,7 @@ networking scenarios, security studies, and increasing the size of physical test
|[Architecture](architecture.md)|Overview of the architecture|
|[Installation](install.md)|Installing from source, packages, & other dependencies|
|[Using the GUI](usage.md)|Details on the different node types and options in the GUI|
|[Distributed](distributed.md)|Overview and details for running CORE across multiple servers|
|[Python Scripting](scripting.md)|How to write python scripts for creating a CORE session|
|[gRPC API](grpc.md)|How to enable and use the gRPC API|
|[Node Types](machine.md)|Overview of node types supported within CORE|


@@ -728,112 +728,6 @@ pseudo-link will be drawn, representing the link between the two nodes on
different canvases. Double-clicking on the label at the end of the arrow will
jump to the canvas that it links.
Distributed Emulation
---------------------
A large emulation scenario can be deployed on multiple emulation servers and
controlled by a single GUI. The GUI, representing the entire topology, can be
run on one of the emulation servers or on a separate machine. Emulations can be
distributed on Linux.
Each machine that will act as an emulation server needs to have CORE installed.
It is not important to have the GUI component but the CORE Python daemon
**core-daemon** needs to be installed. Set the **listenaddr** line in the
**/etc/core/core.conf** configuration file so that the CORE Python
daemon will respond to commands from other servers:
```shell
### core-daemon configuration options ###
[core-daemon]
pidfile = /var/run/core-daemon.pid
logfile = /var/log/core-daemon.log
listenaddr = 0.0.0.0
```
The **listenaddr** should be set to the address of the interface that should
receive CORE API control commands from the other servers; setting **listenaddr
= 0.0.0.0** causes the Python daemon to listen on all interfaces. CORE uses TCP
port 4038 by default to communicate from the controlling machine (with GUI) to
the emulation servers. Make sure that firewall rules are configured as
necessary to allow this traffic.
In order to easily open shells on the emulation servers, the servers should be
running an SSH server, and public key login should be enabled. This is
accomplished by generating an SSH key for your user if you do not already have
one (use **ssh-keygen -t rsa**), and then copying your public key to the
authorized_keys file on the server (for example, **ssh-copy-id user@server** or
**scp ~/.ssh/id_rsa.pub server:.ssh/authorized_keys**.) When double-clicking on
a node during runtime, instead of opening a local shell, the GUI will attempt
to SSH to the emulation server to run an interactive shell. The user name used
for these remote shells is the same user that is running the CORE GUI.
**HINT: Here is a quick distributed emulation checklist.**
1. Install the CORE daemon on all servers.
2. Configure public-key SSH access to all servers (if you want to use
double-click shells or Widgets.)
3. Set **listenaddr=0.0.0.0** in all of the server's core.conf files,
then start (or restart) the daemon.
4. Select nodes, right-click them, and choose *Assign to* to assign
the servers (add servers through *Session*, *Emulation Servers...*)
5. Press the *Start* button to launch the distributed emulation.
Servers are configured by choosing *Emulation servers...* from the *Session*
menu. Servers parameters are configured in the list below and stored in a
*servers.conf* file for use in different scenarios. The IP address and port of
the server must be specified. The name of each server will be saved in the
topology file as each node's location.
**NOTE:**
The server that the GUI connects with
is referred to as the master server.
The user needs to assign nodes to emulation servers in the scenario. Making no
assignment means the node will be emulated on the master server.
In the configuration window of every node, a drop-down box located between
the *Node name* and the *Image* button will select the name of the emulation
server. By default, this menu shows *(none)*, indicating that the node will
be emulated locally on the master. When entering Execute mode, the CORE GUI
will deploy the node on its assigned emulation server.
Another way to assign emulation servers is to select one or more nodes using
the select tool (shift-click to select multiple), and right-click one of the
nodes and choose *Assign to...*.
The *CORE emulation servers* dialog box may also be used to assign nodes to
servers. The assigned server name appears in parentheses next to the node name.
To assign all nodes to one of the servers, click on the server name and then
the *all nodes* button. Servers that have assigned nodes are shown in blue in
the server list. Another option is to first select a subset of nodes, then open
the *CORE emulation servers* box and use the *selected nodes* button.
**IMPORTANT:**
Leave the nodes unassigned if they are to be run on the master server.
Do not explicitly assign the nodes to the master server.
The emulation server machines should be reachable on the specified port and via
SSH. When double-clicking a node to open a shell, the GUI will open
an SSH prompt to that node's emulation server. Public-key authentication should
be configured so that SSH passwords are not needed.
If there is a link between two nodes residing on different servers, the GUI
will draw the link with a dashed line, and automatically create necessary
tunnels between the nodes when executed. Care should be taken to arrange the
topology such that the number of tunnels is minimized. The tunnels carry data
between servers to connect nodes as specified in the topology.
These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
Wireless nodes, i.e. those connected to a WLAN node, can be assigned to
different emulation servers and participate in the same wireless network
only if an
EMANE model is used for the WLAN. The basic range model does not work across multiple servers due
to the Linux bridging and ebtables rules that are used.
**NOTE:**
The basic range wireless model does not support distributed emulation,
but EMANE does.
## Services
CORE uses the concept of services to specify what processes or scripts run on a