From 74ea163e724c7979b4ed61506abab9dd8b8edbad Mon Sep 17 00:00:00 2001
From: bharnden <32446120+bharnden@users.noreply.github.com>
Date: Mon, 17 Jun 2019 20:21:43 -0700
Subject: [PATCH 1/5] Update install.md

---
 docs/install.md | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/docs/install.md b/docs/install.md
index 08cd797d..86b93c34 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -46,10 +46,13 @@ Install Path | Description
 The newly added gRPC API which depends on python library grpcio is not
 commonly found within system repos. To account for this it would be
 recommended to install the python dependencies using the **requirements.txt** found in
-the latest release.
+the latest [CORE Release](https://github.com/coreemu/core/releases).
 
 ```shell
-sudo pip install -r requirements.txt
+# for python 2
+sudo python -m pip install -r requirements.txt
+# for python 3
+sudo python3 -m pip install -r requirements.txt
 ```
 
 ## Ubuntu 19.04
@@ -121,9 +124,9 @@ Ubuntu package defaults to using systemd for running as a service.
 
 ```shell
 # python2
-sudo apt ./core_python_$VERSION_amd64.deb
+sudo apt install ./core_python_$VERSION_amd64.deb
 # python3
-sudo apt ./core_python3_$VERSION_amd64.deb
+sudo apt install ./core_python3_$VERSION_amd64.deb
 ```
 
 Run the CORE GUI as a normal user:

From 3dac7f096cf9928fff637e473844b0ea7496fc77 Mon Sep 17 00:00:00 2001
From: "Blake J. Harnden"
Date: Wed, 19 Jun 2019 10:31:34 -0700
Subject: [PATCH 2/5] docs - added an updated take on running distributed and
 isolated it to its own higher level page

---
 docs/distributed.md | 172 ++++++++++++++++++++++++++++++++++++++++++++
 docs/index.md       |   1 +
 docs/usage.md       | 106 ---------------------------
 3 files changed, 173 insertions(+), 106 deletions(-)
 create mode 100644 docs/distributed.md

diff --git a/docs/distributed.md b/docs/distributed.md
new file mode 100644
index 00000000..526cf041
--- /dev/null
+++ b/docs/distributed.md
@@ -0,0 +1,172 @@
+# CORE - Distributed Emulation
+
+* Table of Contents
+{:toc}
+
+## Overview
+
+A large emulation scenario can be deployed on multiple emulation servers and
+controlled by a single GUI. The GUI, representing the entire topology, can be
+run on one of the emulation servers or on a separate machine.
+
+Each machine that will act as an emulation server should ideally have the
+same version of CORE installed. It is not important to have the GUI component,
+but the CORE Python daemon **core-daemon** needs to be installed.
+
+**NOTE: The server that the GUI connects with is referred to as
+the master server.**
+
+## Configuring Listen Address
+
+First we need to configure the **core-daemon** on all servers to listen on an
+interface reachable over the network. The simplest way is to update the core
+configuration file to listen on all interfaces. Alternatively, configure it to
+listen on a specific interface by supplying that interface's address.
+
+The **listenaddr** configuration should be set to the address of the interface
+that should receive CORE API control commands from the other servers;
+setting **listenaddr = 0.0.0.0** causes the Python daemon to listen on all
+interfaces. CORE uses TCP port **4038** by default to communicate from the
+controlling machine (with GUI) to the emulation servers. Make sure that
+firewall rules are configured as necessary to allow this traffic.
+
+```shell
+# open configuration file
+vi /etc/core/core.conf
+
+# within core.conf
+[core-daemon]
+listenaddr = 0.0.0.0
+```
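+
+After updating the configuration, restart the daemon so the new listen
+address takes effect. As a quick sanity check, something like the following
+should work, assuming a systemd-managed **core-daemon** service and the
+default port:
+
+```shell
+# restart the daemon to pick up the new listen address
+sudo systemctl restart core-daemon
+
+# confirm the daemon is listening on TCP port 4038
+ss -tln | grep 4038
+```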
+
+## Enabling Remote SSH Shells
+
+### Update GUI Terminal Program
+
+**Edit -> Preferences... -> Terminal program:**
+
+We currently recommend setting this to **xterm -e**, as the default
+**gnome-terminal** will not work.
+
+You may need to install xterm, if it is not already installed.
+
+```shell
+sudo apt install xterm
+```
+
+### Setup SSH
+
+In order to easily open shells on the emulation servers, the servers should be
+running an SSH server, and public key login should be enabled. This is
+accomplished by generating an SSH key for your user on the master server, if
+you do not already have one, and then copying your public key to the
+authorized_keys file on all other servers that will be used to help drive the
+distributed emulation. When double-clicking on a node during runtime, instead
+of opening a local shell, the GUI will attempt to SSH to the emulation server
+to run an interactive shell.
+
+You need to have the same user defined on each server, since the user used
+for these remote shells is the same user that is running the CORE GUI.
+
+```shell
+# install openssh-server
+sudo apt install openssh-server
+
+# generate ssh key if needed
+ssh-keygen -o -t rsa -b 4096
+
+# copy public key to authorized_keys file
+ssh-copy-id user@server
+# or
+scp ~/.ssh/id_rsa.pub username@server:~/.ssh/authorized_keys
+```
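+
+Before relying on the GUI, it is worth confirming that key-based login works
+without a password prompt. A quick check, using the same **user@server**
+placeholders as above:
+
+```shell
+# should print the remote hostname without asking for a password
+ssh user@server hostname
+```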
+
+## Add Emulation Servers in GUI
+
+Within the core-gui navigate to menu option:
+
+**Session -> Emulation servers...**
+
+Within the dialog box presented, add a new server, or modify an existing one,
+to use the name, address, and port of a server you plan to use.
+
+Server configurations are loaded from and written to a configuration file for
+the GUI:
+
+**~/.core/servers.conf**
+```conf
+# name address port
+server2 192.168.0.2 4038
+```
+
+## Assigning Nodes
+
+The user needs to assign nodes to emulation servers in the scenario. Making no
+assignment means the node will be emulated on the master server.
+In the configuration window of every node, a drop-down box located between
+the *Node name* and the *Image* button will select the name of the emulation
+server. By default, this menu shows *(none)*, indicating that the node will
+be emulated locally on the master. When entering Execute mode, the CORE GUI
+will deploy the node on its assigned emulation server.
+
+Another way to assign emulation servers is to select one or more nodes using
+the select tool (shift-click to select multiple), then right-click one of the
+nodes and choose *Assign to...*.
+
+The **CORE emulation servers** dialog box may also be used to assign nodes to
+servers. The assigned server name appears in parentheses next to the node name.
+To assign all nodes to one of the servers, click on the server name and then
+the **all nodes** button. Servers that have assigned nodes are shown in blue in
+the server list. Another option is to first select a subset of nodes, then open
+the **CORE emulation servers** box and use the **selected nodes** button.
+
+**IMPORTANT: Leave the nodes unassigned if they are to be run on the master
+server. Do not explicitly assign the nodes to the master server.**
+
+## GUI Visualization
+
+If there is a link between two nodes residing on different servers, the GUI
+will draw the link with a dashed line.
+
+## Concerns and Limitations
+
+Wireless nodes, i.e. those connected to a WLAN node, can be assigned to
+different emulation servers and participate in the same wireless network
+only if an EMANE model is used for the WLAN. The basic range model does
+not work across multiple servers due to the Linux bridging and ebtables
+rules that are used.
+
+**NOTE: The basic range wireless model does not support distributed emulation,
+but EMANE does.**
+
+When nodes are linked across servers, the **core-daemons** will automatically
+create the necessary tunnels between them when the session is executed. Care should be taken
+to arrange the topology such that the number of tunnels is minimized. The
+tunnels carry data between servers to connect nodes as specified in the topology.
+These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
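+
+If cross-server links need troubleshooting, the tunnels can be inspected on
+the servers while a session is running. The sketch below assumes a Linux
+system with iproute2 and that the tunnels appear as gretap devices:
+
+```shell
+# list GRE tap devices created for the running session
+ip -d link show type gretap
+```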
+
+### EMANE Issues
+
+EMANE appears to require location events to be synced across all EMANE
+instances in order for nodes to find each other. Using an EMANE eel file
+for your scenario can help address this, which might be desired anyway.
+
+* https://github.com/adjacentlink/emane/wiki/EEL-Generator
+
+If the nodes do not find each other by default and you are not using an eel
+file, you can also move nodes within the GUI to trigger location events from
+CORE when the **core.conf** setting below is used.
+
+```shell
+emane_event_generate = True
+```
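+
+For reference, an eel file is simply a list of timestamped location events,
+one per NEM. The NEM ids and coordinates below are made-up placeholders; see
+the EEL Generator wiki above for the full syntax:
+
+```shell
+0.0 nem:1 location gps 40.031075,-74.523518,3.000000
+0.0 nem:2 location gps 40.031165,-74.523412,3.000000
+```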
+
+## Distributed Checklist
+
+1. Install the same version of the CORE daemon on all servers.
+1. Set the **listenaddr** configuration in every server's core.conf file,
+then start (or restart) the daemon; a quick reachability check is sketched
+below.
+1. Install and configure public-key SSH access on all servers (if you want
+to use double-click shells or Widgets).
+1. Assign nodes to their desired servers; leave a node unassigned to run it
+on the master server.
+1. Press the **Start** button to launch the distributed emulation.
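+
+Before pressing **Start** on a large scenario, a quick end-to-end check of
+the control connections can save time. A sketch, assuming the example
+**server2** entry from **~/.core/servers.conf** shown earlier:
+
+```shell
+# verify the API port is reachable from the master
+nc -zv 192.168.0.2 4038
+```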
diff --git a/docs/index.md b/docs/index.md
index 6d0a0477..fb640347 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -23,6 +23,7 @@ networking scenarios, security studies, and increasing the size of physical test
 |[Architecture](architecture.md)|Overview of the architecture|
 |[Installation](install.md)|Installing from source, packages, & other dependencies|
 |[Using the GUI](usage.md)|Details on the different node types and options in the GUI|
+|[Distributed](distributed.md)|Overview and details for running CORE across multiple servers|
 |[Python Scripting](scripting.md)|How to write python scripts for creating a CORE session|
 |[gRPC API](grpc.md)|How to enable and use the gRPC API|
 |[Node Types](machine.md)|Overview of node types supported within CORE|
diff --git a/docs/usage.md b/docs/usage.md
index fa029e2c..f36e6019 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -728,112 +728,6 @@ pseudo-link will be drawn, representing the link between the two nodes on
 different canvases. Double-clicking on the label at the end of the arrow
 will jump to the canvas that it links.
 
-Distributed Emulation
----------------------
-
-A large emulation scenario can be deployed on multiple emulation servers and
-controlled by a single GUI. The GUI, representing the entire topology, can be
-run on one of the emulation servers or on a separate machine. Emulations can be
-distributed on Linux.
-
-Each machine that will act as an emulation server needs to have CORE installed.
-It is not important to have the GUI component but the CORE Python daemon
-**core-daemon** needs to be installed. Set the **listenaddr** line in the
-**/etc/core/core.conf** configuration file so that the CORE Python
-daemon will respond to commands from other servers:
-
-```shell
-### core-daemon configuration options ###
-[core-daemon]
-pidfile = /var/run/core-daemon.pid
-logfile = /var/log/core-daemon.log
-listenaddr = 0.0.0.0
-```
-
-
-The **listenaddr** should be set to the address of the interface that should
-receive CORE API control commands from the other servers; setting **listenaddr
-= 0.0.0.0** causes the Python daemon to listen on all interfaces. CORE uses TCP
-port 4038 by default to communicate from the controlling machine (with GUI) to
-the emulation servers. Make sure that firewall rules are configured as
-necessary to allow this traffic.
-
-In order to easily open shells on the emulation servers, the servers should be
-running an SSH server, and public key login should be enabled. This is
-accomplished by generating an SSH key for your user if you do not already have
-one (use **ssh-keygen -t rsa**), and then copying your public key to the
-authorized_keys file on the server (for example, **ssh-copy-id user@server** or
-**scp ~/.ssh/id_rsa.pub server:.ssh/authorized_keys**.) When double-clicking on
-a node during runtime, instead of opening a local shell, the GUI will attempt
-to SSH to the emulation server to run an interactive shell. The user name used
-for these remote shells is the same user that is running the CORE GUI.
-
-**HINT: Here is a quick distributed emulation checklist.**
-
-1. Install the CORE daemon on all servers.
-2. Configure public-key SSH access to all servers (if you want to use
-double-click shells or Widgets.)
-3. Set **listenaddr=0.0.0.0** in all of the server's core.conf files,
-then start (or restart) the daemon.
-4. Select nodes, right-click them, and choose *Assign to* to assign
-the servers (add servers through *Session*, *Emulation Servers...*)
-5. Press the *Start* button to launch the distributed emulation.
-
-Servers are configured by choosing *Emulation servers...* from the *Session*
-menu. Servers parameters are configured in the list below and stored in a
-*servers.conf* file for use in different scenarios. The IP address and port of
-the server must be specified. The name of each server will be saved in the
-topology file as each node's location.
-
-**NOTE:**
-   The server that the GUI connects with
-   is referred to as the master server.
-
-The user needs to assign nodes to emulation servers in the scenario. Making no
-assignment means the node will be emulated on the master server
-In the configuration window of every node, a drop-down box located between
-the *Node name* and the *Image* button will select the name of the emulation
-server. By default, this menu shows *(none)*, indicating that the node will
-be emulated locally on the master. When entering Execute mode, the CORE GUI
-will deploy the node on its assigned emulation server.
-
-Another way to assign emulation servers is to select one or more nodes using
-the select tool (shift-click to select multiple), and right-click one of the
-nodes and choose *Assign to...*.
-
-The *CORE emulation servers* dialog box may also be used to assign nodes to
-servers. The assigned server name appears in parenthesis next to the node name.
-To assign all nodes to one of the servers, click on the server name and then
-the *all nodes* button. Servers that have assigned nodes are shown in blue in
-the server list. Another option is to first select a subset of nodes, then open
-the *CORE emulation servers* box and use the *selected nodes* button.
-
-**IMPORTANT:**
-   Leave the nodes unassigned if they are to be run on the master server.
-   Do not explicitly assign the nodes to the master server.
-
-The emulation server machines should be reachable on the specified port and via
-SSH. SSH is used when double-clicking a node to open a shell, the GUI will open
-an SSH prompt to that node's emulation server. Public-key authentication should
-be configured so that SSH passwords are not needed.
-
-If there is a link between two nodes residing on different servers, the GUI
-will draw the link with a dashed line, and automatically create necessary
-tunnels between the nodes when executed. Care should be taken to arrange the
-topology such that the number of tunnels is minimized. The tunnels carry data
-between servers to connect nodes as specified in the topology.
-These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
-
-Wireless nodes, i.e. those connected to a WLAN node, can be assigned to
-different emulation servers and participate in the same wireless network
-only if an
-EMANE model is used for the WLAN. The basic range model does not work across multiple servers due
-to the Linux bridging and ebtables rules that are used.
-
-**NOTE:**
-   The basic range wireless model does not support distributed emulation,
-   but EMANE does.
-
 ## Services
 
 CORE uses the concept of services to specify what processes or scripts run on a

From ee6b420c9e846bf799d5c2d14869e0619aee985f Mon Sep 17 00:00:00 2001
From: bharnden <32446120+bharnden@users.noreply.github.com>
Date: Wed, 19 Jun 2019 14:07:04 -0700
Subject: [PATCH 3/5] Update distributed.md

---
 docs/distributed.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/distributed.md b/docs/distributed.md
index 526cf041..b251c4ca 100644
--- a/docs/distributed.md
+++ b/docs/distributed.md
@@ -145,7 +145,15 @@ to arrange the topology such that the number of tunnels is minimized. The
 tunnels carry data between servers to connect nodes as specified in the topology.
 These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
 
-### EMANE Issues
+### EMANE Configuration and Issues
+
+EMANE needs to have controlnet configured in **core.conf** in order to start up
+correctly. The names before the addresses must match the server names
+configured earlier in **~/.core/servers.conf**.
+
+```shell
+controlnet = core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24 core4:172.16.4.0/24 core5:172.16.5.0/24
+```
 
 EMANE appears to require location events to be synced across all EMANE
 instances in order for nodes to find each other. Using an EMANE eel file

From f9304b0875596c9e15355966430075daa4f61792 Mon Sep 17 00:00:00 2001
From: bharnden <32446120+bharnden@users.noreply.github.com>
Date: Fri, 21 Jun 2019 12:51:20 -0700
Subject: [PATCH 4/5] Update install.md

fixed missing command to build ospf mdr and added possible dependencies
---
 docs/install.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/install.md b/docs/install.md
index 86b93c34..ac1362d2 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -89,9 +89,13 @@ sudo dpkg -i quagga-mr_0.99.21mr2.2_amd64.deb
 
 Requires building from source, from the latest nightly snapshot.
 
 ```shell
+# packages needed beyond what's normally required to build core on ubuntu
+sudo apt install libtool libreadline-dev
+
 wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/nightly_snapshots/quagga-svnsnap.tgz
 tar xzf quagga-svnsnap.tgz
 cd quagga
+./bootstrap.sh
 ./configure --enable-user=root --enable-group=root --with-cflags=-ggdb \
     --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
     --localstatedir=/var/run/quagga
 make

From e11ec020ebbd8df0d1c78d4be249de3c87190587 Mon Sep 17 00:00:00 2001
From: bharnden <32446120+bharnden@users.noreply.github.com>
Date: Fri, 21 Jun 2019 12:57:32 -0700
Subject: [PATCH 5/5] Update install.md

avoid building docs for ospf mdr
---
 docs/install.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/install.md b/docs/install.md
index ac1362d2..ae3146af 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -96,7 +96,7 @@ wget https://downloads.pf.itd.nrl.navy.mil/ospf-manet/nightly_snapshots/quagga-s
 tar xzf quagga-svnsnap.tgz
 cd quagga
 ./bootstrap.sh
-./configure --enable-user=root --enable-group=root --with-cflags=-ggdb \
+./configure --disable-doc --enable-user=root --enable-group=root --with-cflags=-ggdb \
     --sysconfdir=/usr/local/etc/quagga --enable-vtysh \
     --localstatedir=/var/run/quagga
 make