Merge pull request #787 from coreemu/enhancement/documentation

Enhancement/documentation
This commit is contained in:
bharnden 2023-04-18 12:12:57 -07:00 committed by GitHub
commit 946d161c11
35 changed files with 995 additions and 813 deletions

.gitignore
View file

@ -18,6 +18,9 @@ configure~
debian
stamp-h1
# python virtual environments
venv
# generated protobuf files
*_pb2.py
*_pb2_grpc.py

View file

@ -1,25 +1,22 @@
# CORE Architecture
## Main Components
* core-daemon
    * Manages emulated sessions of nodes and links for a given network
    * Nodes are created using Linux namespaces
    * Links are created using Linux bridges and virtual ethernet peers
    * Packets sent over links are manipulated using traffic control
    * Provides gRPC API
* core-gui
    * GUI and daemon communicate over gRPC API
    * Drag and drop creation for nodes and links
    * Can launch terminals for emulated nodes in running sessions
    * Can save/open scenario files to recreate previous sessions
* vnoded
    * Command line utility for creating CORE node namespaces
* vcmd
    * Command line utility for sending shell commands to nodes
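Once a session is running, these utilities can be used directly. A quick sketch (the session id in the path below is illustrative; actual session directories appear under /tmp/pycore.*):

```shell
# run a command inside node n1's namespace for a hypothetical session
vcmd -c /tmp/pycore.12345/n1 -- ip addr show
```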
![](static/architecture.png)
@ -57,5 +54,5 @@ rules.
CORE has been released by Boeing to the open source community under the BSD
license. If you find CORE useful for your work, please contribute back to the
project. Contributions can be as simple as reporting a bug, dropping a line of
encouragement, or can also include submitting patches or maintaining aspects
of the tool.

View file

@ -1,7 +1,4 @@
# Config Services
## Overview
@ -15,6 +12,7 @@ CORE services are a convenience for creating reusable dynamic scripts
to run on nodes, for carrying out specific task(s).
This boils down to the following functions:
* generating files the service will use, either directly for commands or for configuration
* command(s) for starting a service
* command(s) for validating a service
@ -81,30 +79,28 @@ introduced to automate tasks.
### Creating New Services
!!! note
The directory base name used in **custom_services_dir** below should
be unique and should not correspond to any existing Python module name.
For example, don't use the name **subprocess** or **services**.
1. Modify the example service shown below
to do what you want. It could generate config/script files, mount per-node
directories, start processes/scripts, etc. Your file can define one or more
classes to be imported. You can create multiple Python files that will be imported.
2. Put these files in a directory such as **~/.coregui/custom_services**.
3. Add a **custom_config_services_dir = ~/.coregui/custom_services** entry to the
/etc/core/core.conf file.
4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax)
should be displayed in the terminal (or service log, like journalctl).
5. Start using your custom service on your nodes. You can create a new node
type that uses your service, or change the default services for an existing
node type, or change individual nodes.
### Example Custom Service
@ -121,6 +117,7 @@ from typing import Dict, List
from core.config import ConfigString, ConfigBool, Configuration
from core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir
# class that subclasses ConfigService
class ExampleService(ConfigService):
# unique name for your service within CORE
@ -129,7 +126,7 @@ class ExampleService(ConfigService):
group: str = "ExampleGroup"
# directories that the service should shadow mount, hiding the system directory
directories: List[str] = [
"/usr/local/core",
"/usr/local/core",
]
# files that this service should generate, defaults to nodes home directory
# or can provide an absolute path to a mounted directory

View file

@ -1,13 +1,10 @@
# CORE Control Network
## Overview
The CORE control network allows the virtual nodes to communicate with their
host environment. There are two types: the primary control network and
auxiliary control networks. The primary control network is used mainly for
communicating with the virtual nodes from host machines and for master-slave
communications in a multi-server distributed environment. Auxiliary control
networks have been introduced for routing namespace hosted emulation
@ -30,15 +27,19 @@ new sessions will use by default. To simultaneously run multiple sessions with
control networks, the session option should be used instead of the *core.conf*
default.
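For reference, a default control network is declared with a single line in the configuration file (a sketch; the prefix shown is illustrative):

```shell
# /etc/core/core.conf
controlnet = 172.16.0.0/24
```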
!!! note
If you have a large scenario with more than 253 nodes, use a control
network prefix that allows more than the suggested */24*, such as */23* or
greater.
!!! note
Running a session with a control network can fail if a previous
session has set up a control network and its bridge is still up. Close
the previous session first or wait for it to complete. If unable to, the
**core-daemon** may need to be restarted and the lingering bridge(s) removed
manually.
```shell
# Restart the CORE Daemon
@ -52,11 +53,13 @@ for cb in $ctrlbridges; do
done
```
!!! note
If adjustments to the primary control network configuration made in
**/etc/core/core.conf** do not seem to take effect, check if there is anything
set in the *Session Menu*, the *Options...* dialog. They may need to be
cleared. These per-session settings override the defaults in
**/etc/core/core.conf**.
## Control Network in Distributed Sessions
@ -102,9 +105,9 @@ argument being the keyword *"shutdown"*.
Starting with EMANE 0.9.2, CORE will run EMANE instances within namespaces.
Since it is advisable to separate the OTA traffic from other traffic, we will
need more than a single channel leading out from the namespace. Up to three
auxiliary control networks may be defined. Multiple control networks are set
up in the */etc/core/core.conf* file. Lines *controlnet1*, *controlnet2* and
*controlnet3* define the auxiliary networks.
For example, having the following */etc/core/core.conf*:
@ -114,18 +117,20 @@ controlnet1 = core1:172.18.1.0/24 core2:172.18.2.0/24 core3:172.18.3.0/24
controlnet2 = core1:172.19.1.0/24 core2:172.19.2.0/24 core3:172.19.3.0/24
```
This will activate the primary and two auxiliary control networks and add
interfaces *ctrl0*, *ctrl1*, *ctrl2* to each node. One use case would be to
assign *ctrl1* to the OTA manager device and *ctrl2* to the Event Service
device in the EMANE Options dialog box and leave *ctrl0* for CORE control
traffic.
!!! note
*controlnet0* may be used in place of *controlnet* to configure
the primary control network.
Unlike the primary control network, the auxiliary control networks will not
employ tunneling since their primary purpose is for efficiently transporting
multicast EMANE OTA and event traffic. Note that there is no per-session
configuration for auxiliary control networks.
To extend the auxiliary control networks across a distributed test
@ -139,9 +144,11 @@ controlnetif2 = eth2
controlnetif3 = eth3
```
!!! note
There is no need to assign an interface to the primary control
network because tunnels are formed between the master and the slaves using IP
addresses that are provided in *servers.conf*.
Shown below is a representative diagram of the configuration above.

View file

@ -1,9 +1,6 @@
# CORE Developer's Guide
## Overview
The CORE source consists of several programming languages for
historical reasons. Current development focuses on the Python modules and
@ -65,7 +62,7 @@ inv test-mock
## Linux Network Namespace Commands
Linux network namespace containers are often managed using the *Linux Container Tools* or *lxc-tools* package.
The lxc-tools website, http://lxc.sourceforge.net/, has more information. CORE does not use these
management utilities, but includes its own set of tools for instantiating and configuring network namespace containers.
This section describes these tools.
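For instance, a namespace container can be created by hand with *vnoded* (a sketch; the control socket, log, and pid file paths are illustrative):

```shell
# create a namespace with a control socket, log file, and pid file
sudo vnoded -v -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid
```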
@ -100,7 +97,7 @@ vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro
A script named *core-cleanup* is provided to clean up any running CORE emulations. It will attempt to kill any
remaining vnoded processes, kill any EMANE processes, remove the `/tmp/pycore.*` session directories, and remove
any bridges or *nftables* rules. With a *-d* option, it will also kill any running CORE daemon.
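A minimal invocation, per the description above:

```shell
# kill lingering vnoded/EMANE processes, remove session directories and bridges;
# -d also kills a running core-daemon
sudo core-cleanup -d
```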
### netns command

View file

@ -1,8 +1,5 @@
# CORE - Distributed Emulation
## Overview
A large emulation scenario can be deployed on multiple emulation servers and
@ -61,6 +58,7 @@ First the distributed servers must be configured to allow passwordless root
login over SSH.
On distributed server:
```shell
# install openssh-server
sudo apt install openssh-server
@ -81,6 +79,7 @@ sudo systemctl restart sshd
```
On master server:
```shell
# install package if needed
sudo apt install openssh-client
@ -99,6 +98,7 @@ connect_kwargs: {"key_filename": "/home/user/.ssh/core"}
```
On distributed server:
```shell
# open sshd config
vi /etc/ssh/sshd_config
@ -116,8 +116,9 @@ Make sure the value used below is the absolute path to the file
generated above **~/.ssh/core**
Add/update the fabric configuration file **/etc/fabric.yml**:
```yaml
connect_kwargs: {"key_filename": "/home/user/.ssh/core"}
connect_kwargs: { "key_filename": "/home/user/.ssh/core" }
```
## Add Emulation Servers in GUI
@ -169,8 +170,10 @@ only if an EMANE model is used for the WLAN. The basic range model does
not work across multiple servers due to the Linux bridging and nftables
rules that are used.
!!! note
The basic range wireless model does not support distributed emulation,
but EMANE does.
When nodes are linked across servers **core-daemons** will automatically
create necessary tunnels between the nodes when executed. Care should be taken
@ -181,10 +184,10 @@ These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
## Distributed Checklist
1. Install CORE on master server
2. Install distributed CORE package on all servers needed
3. Install and configure public-key SSH access on all servers (if you want to use
double-click shells or Widgets) for both the GUI user (for terminals) and root for running CORE commands
4. Update CORE configuration as needed
5. Choose the servers that participate in distributed emulation.
6. Assign nodes to desired servers, empty for master server.
7. Press the **Start** button to launch the distributed emulation.

View file

@ -15,7 +15,6 @@ sudo apt install docker.io
### RHEL Systems
## Configuration
Custom configuration is required to avoid iptables rules being added and removing
@ -26,8 +25,8 @@ Place the file below in **/etc/docker/docker.json**
```json
{
"bridge": "none",
"iptables": false
"bridge": "none",
"iptables": false
}
```
@ -53,6 +52,7 @@ Images used by Docker nodes in CORE need to have networking tools installed for
CORE to automate setup and configuration of the network within the container.
Example Dockerfile:
```dockerfile
FROM ubuntu:latest
RUN apt-get update
@ -60,6 +60,7 @@ RUN apt-get install -y iproute2 ethtool
```
Build image:
```shell
sudo docker build -t <name> .
```
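Before using the image with CORE, it can be sanity-checked directly (a sketch; **<name>** is the tag built above):

```shell
# confirm the networking tools are present inside the image
sudo docker run --rm <name> ip link show
```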

View file

@ -1,7 +1,4 @@
# EMANE (Extendable Mobile Ad-hoc Network Emulator)
## What is EMANE?
@ -31,7 +28,7 @@ and instantiates one EMANE process in the namespace. The EMANE process binds a
user space socket to the TAP device for sending and receiving data from CORE.
An EMANE instance sends and receives OTA (Over-The-Air) traffic to and from
other EMANE instances via a control port (e.g. *ctrl0*, *ctrl1*). It also
sends and receives Events to and from the Event Service using the same or a
different control port. EMANE models are configured through the GUI's
configuration dialog. A corresponding EmaneModel Python class is sub-classed
@ -60,7 +57,9 @@ You can find more detailed tutorials and examples at the
Every topic below assumes CORE, EMANE, and OSPF MDR have been installed.
!!! info
Demo files will be found within the `core-gui` **~/.coregui/xmls** directory
| Topic | Model | Description |
|--------------------------------------|---------|-----------------------------------------------------------|
@ -92,8 +91,10 @@ If you have an EMANE event generator (e.g. mobility or pathloss scripts) and
want to have CORE subscribe to EMANE location events, set the following line
in the **core.conf** configuration file.
!!! note
Do not set this option to True if you want to manually drag nodes around
on the canvas to update their location in EMANE.
```shell
emane_event_monitor = True
@ -104,6 +105,7 @@ prefix will place the DTD files in **/usr/local/share/emane/dtd** while CORE
expects them in **/usr/share/emane/dtd**.
Update the EMANE prefix configuration to resolve this problem.
```shell
emane_prefix = /usr/local
```
@ -116,6 +118,7 @@ placed within the path defined by **emane_models_dir** in the CORE
configuration file. This path cannot end in **/emane**.
Here is an example model with documentation describing functionality:
```python
"""
Example custom emane model.
@ -210,7 +213,7 @@ The EMANE models should be listed here for selection. (You may need to restart t
CORE daemon if it was running prior to installing the EMANE Python bindings.)
When an EMANE model is selected, you can click on the models option button
causing the GUI to query the CORE daemon for configuration items.
Each model will have different parameters, refer to the
EMANE documentation for an explanation of each item. The default values are
presented in the dialog. Clicking *Apply* and *Apply* again will store the
@ -220,7 +223,7 @@ The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports
geographic location information for determining pathloss between nodes. A
default latitude and longitude location is provided by CORE and this
location-based pathloss is enabled by default; this is the *pathloss mode*
setting for the Universal PHY. Moving a node on the canvas while the
emulation is running generates location events for EMANE. To view or change
the geographic location or scale of the canvas use the *Canvas Size and Scale*
dialog available from the *Canvas* menu.
@ -237,7 +240,7 @@ to be created in the virtual nodes that are linked to the EMANE WLAN. These
devices appear with interface names such as eth0, eth1, etc. The EMANE processes
should now be running in each namespace.
To view the configuration generated by CORE, look in the */tmp/pycore.nnnnn/* session
directory to find the generated EMANE xml files. One easy way to view
this information is by double-clicking one of the virtual nodes and listing the files
in the shell.
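For example (a sketch; the session id is illustrative):

```shell
# from a node's shell, list the EMANE xml files generated by CORE
cd /tmp/pycore.12345/n1.conf
ls *.xml
```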
@ -279,14 +282,15 @@ being used, along with changing any configuration setting from their defaults.
![](static/emane-configuration.png)
!!! note
Here is a quick checklist for distributed emulation with EMANE.
1. Follow the steps outlined for normal CORE.
2. Assign nodes to desired servers
3. Synchronize your machine's clocks prior to starting the emulation,
using *ntp* or *ptp*. Some EMANE models are sensitive to timing.
4. Press the *Start* button to launch the distributed emulation.
Now when the Start button is used to instantiate the emulation, the local CORE
daemon will connect to other emulation servers that have been assigned

View file

@ -1,8 +1,7 @@
# EMANE Antenna Profiles
## Overview
Introduction to using the EMANE antenna profile in CORE, based on the example
EMANE Demo linked below.
@ -10,340 +9,348 @@ EMANE Demo linked below.
for more specifics.
## Demo Setup
We will need to create some files in advance of starting this session.
Create a directory to place the antenna profile files.
```shell
mkdir /tmp/emane
```
Create `/tmp/emane/antennaprofile.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE profiles SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
<profiles>
<profile id="1"
antennapatternuri="/tmp/emane/antenna30dsector.xml"
blockagepatternuri="/tmp/emane/blockageaft.xml">
<placement north="0" east="0" up="0"/>
</profile>
<profile id="2"
antennapatternuri="/tmp/emane/antenna30dsector.xml">
<placement north="0" east="0" up="0"/>
</profile>
<profile id="1"
antennapatternuri="/tmp/emane/antenna30dsector.xml"
blockagepatternuri="/tmp/emane/blockageaft.xml">
<placement north="0" east="0" up="0"/>
</profile>
<profile id="2"
antennapatternuri="/tmp/emane/antenna30dsector.xml">
<placement north="0" east="0" up="0"/>
</profile>
</profiles>
```
Create `/tmp/emane/antenna30dsector.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE antennaprofile SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
<!-- 30degree sector antenna pattern with main beam at +6dB and gain decreasing by 3dB every 5 degrees in elevation or bearing.-->
<antennaprofile>
<antennapattern>
<elevation min='-90' max='-16'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
<elevation min='-15' max='-11'>
<bearing min='0' max='5'>
<gain value='0'/>
</bearing>
<bearing min='6' max='10'>
<gain value='-3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-6'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-6'/>
</bearing>
<bearing min='350' max='354'>
<gain value='-3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='0'/>
</bearing>
</elevation>
<elevation min='-10' max='-6'>
<bearing min='0' max='5'>
<gain value='3'/>
</bearing>
<bearing min='6' max='10'>
<gain value='0'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-3'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-3'/>
</bearing>
<bearing min='350' max='354'>
<gain value='0'/>
</bearing>
<bearing min='355' max='359'>
<gain value='3'/>
</bearing>
</elevation>
<elevation min='-5' max='-1'>
<bearing min='0' max='5'>
<gain value='6'/>
</bearing>
<bearing min='6' max='10'>
<gain value='3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='0'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='0'/>
</bearing>
<bearing min='350' max='354'>
<gain value='3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='6'/>
</bearing>
</elevation>
<elevation min='0' max='5'>
<bearing min='0' max='5'>
<gain value='6'/>
</bearing>
<bearing min='6' max='10'>
<gain value='3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='0'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='0'/>
</bearing>
<bearing min='350' max='354'>
<gain value='3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='6'/>
</bearing>
</elevation>
<elevation min='6' max='10'>
<bearing min='0' max='5'>
<gain value='3'/>
</bearing>
<bearing min='6' max='10'>
<gain value='0'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-3'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-3'/>
</bearing>
<bearing min='350' max='354'>
<gain value='0'/>
</bearing>
<bearing min='355' max='359'>
<gain value='3'/>
</bearing>
</elevation>
<elevation min='11' max='15'>
<bearing min='0' max='5'>
<gain value='0'/>
</bearing>
<bearing min='6' max='10'>
<gain value='-3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-6'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-6'/>
</bearing>
<bearing min='350' max='354'>
<gain value='-3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='0'/>
</bearing>
</elevation>
<elevation min='16' max='90'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
</antennapattern>
<antennapattern>
<elevation min='-90' max='-16'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
<elevation min='-15' max='-11'>
<bearing min='0' max='5'>
<gain value='0'/>
</bearing>
<bearing min='6' max='10'>
<gain value='-3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-6'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-6'/>
</bearing>
<bearing min='350' max='354'>
<gain value='-3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='0'/>
</bearing>
</elevation>
<elevation min='-10' max='-6'>
<bearing min='0' max='5'>
<gain value='3'/>
</bearing>
<bearing min='6' max='10'>
<gain value='0'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-3'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-3'/>
</bearing>
<bearing min='350' max='354'>
<gain value='0'/>
</bearing>
<bearing min='355' max='359'>
<gain value='3'/>
</bearing>
</elevation>
<elevation min='-5' max='-1'>
<bearing min='0' max='5'>
<gain value='6'/>
</bearing>
<bearing min='6' max='10'>
<gain value='3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='0'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='0'/>
</bearing>
<bearing min='350' max='354'>
<gain value='3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='6'/>
</bearing>
</elevation>
<elevation min='0' max='5'>
<bearing min='0' max='5'>
<gain value='6'/>
</bearing>
<bearing min='6' max='10'>
<gain value='3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='0'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='0'/>
</bearing>
<bearing min='350' max='354'>
<gain value='3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='6'/>
</bearing>
</elevation>
<elevation min='6' max='10'>
<bearing min='0' max='5'>
<gain value='3'/>
</bearing>
<bearing min='6' max='10'>
<gain value='0'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-3'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-3'/>
</bearing>
<bearing min='350' max='354'>
<gain value='0'/>
</bearing>
<bearing min='355' max='359'>
<gain value='3'/>
</bearing>
</elevation>
<elevation min='11' max='15'>
<bearing min='0' max='5'>
<gain value='0'/>
</bearing>
<bearing min='6' max='10'>
<gain value='-3'/>
</bearing>
<bearing min='11' max='15'>
<gain value='-6'/>
</bearing>
<bearing min='16' max='344'>
<gain value='-200'/>
</bearing>
<bearing min='345' max='349'>
<gain value='-6'/>
</bearing>
<bearing min='350' max='354'>
<gain value='-3'/>
</bearing>
<bearing min='355' max='359'>
<gain value='0'/>
</bearing>
</elevation>
<elevation min='16' max='90'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
</antennapattern>
</antennaprofile>
```
Create `/tmp/emane/blockageaft.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE antennaprofile SYSTEM "file:///usr/share/emane/dtd/antennaprofile.dtd">
<!-- blockage pattern: 1) entire aft in bearing (90 to 270) blocked 2) elevation below -10 blocked, 3) elevation from -10 to -1 is at -10dB to -1 dB 3) elevation from 0 to 90 no blockage-->
<antennaprofile>
<blockagepattern>
<elevation min='-90' max='-11'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
<elevation min='-10' max='-10'>
<bearing min='0' max='89'>
<gain value='-10'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-10'/>
</bearing>
</elevation>
<elevation min='-9' max='-9'>
<bearing min='0' max='89'>
<gain value='-9'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-9'/>
</bearing>
</elevation>
<elevation min='-8' max='-8'>
<bearing min='0' max='89'>
<gain value='-8'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-8'/>
</bearing>
</elevation>
<elevation min='-7' max='-7'>
<bearing min='0' max='89'>
<gain value='-7'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-7'/>
</bearing>
</elevation>
<elevation min='-6' max='-6'>
<bearing min='0' max='89'>
<gain value='-6'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-6'/>
</bearing>
</elevation>
<elevation min='-5' max='-5'>
<bearing min='0' max='89'>
<gain value='-5'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-5'/>
</bearing>
</elevation>
<elevation min='-4' max='-4'>
<bearing min='0' max='89'>
<gain value='-4'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-4'/>
</bearing>
</elevation>
<elevation min='-3' max='-3'>
<bearing min='0' max='89'>
<gain value='-3'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-3'/>
</bearing>
</elevation>
<elevation min='-2' max='-2'>
<bearing min='0' max='89'>
<gain value='-2'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-2'/>
</bearing>
</elevation>
<elevation min='-1' max='-1'>
<bearing min='0' max='89'>
<gain value='-1'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-1'/>
</bearing>
</elevation>
<elevation min='0' max='90'>
<bearing min='0' max='89'>
<gain value='0'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='0'/>
</bearing>
</elevation>
</blockagepattern>
<blockagepattern>
<elevation min='-90' max='-11'>
<bearing min='0' max='359'>
<gain value='-200'/>
</bearing>
</elevation>
<elevation min='-10' max='-10'>
<bearing min='0' max='89'>
<gain value='-10'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-10'/>
</bearing>
</elevation>
<elevation min='-9' max='-9'>
<bearing min='0' max='89'>
<gain value='-9'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-9'/>
</bearing>
</elevation>
<elevation min='-8' max='-8'>
<bearing min='0' max='89'>
<gain value='-8'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-8'/>
</bearing>
</elevation>
<elevation min='-7' max='-7'>
<bearing min='0' max='89'>
<gain value='-7'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-7'/>
</bearing>
</elevation>
<elevation min='-6' max='-6'>
<bearing min='0' max='89'>
<gain value='-6'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-6'/>
</bearing>
</elevation>
<elevation min='-5' max='-5'>
<bearing min='0' max='89'>
<gain value='-5'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-5'/>
</bearing>
</elevation>
<elevation min='-4' max='-4'>
<bearing min='0' max='89'>
<gain value='-4'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-4'/>
</bearing>
</elevation>
<elevation min='-3' max='-3'>
<bearing min='0' max='89'>
<gain value='-3'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-3'/>
</bearing>
</elevation>
<elevation min='-2' max='-2'>
<bearing min='0' max='89'>
<gain value='-2'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-2'/>
</bearing>
</elevation>
<elevation min='-1' max='-1'>
<bearing min='0' max='89'>
<gain value='-1'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='-1'/>
</bearing>
</elevation>
<elevation min='0' max='90'>
<bearing min='0' max='89'>
<gain value='0'/>
</bearing>
<bearing min='90' max='270'>
<gain value='-200'/>
</bearing>
<bearing min='271' max='359'>
<gain value='0'/>
</bearing>
</elevation>
</blockagepattern>
</antennaprofile>
```
## Run Demo
1. Select `Open...` within the GUI
2. Load `emane-demo-antenna.xml`
3. Click ![Start Button](../static/gui/start.png)
4. After startup completes, double-click n1 to bring up the node's terminal
## Example Demo
This demo will cover running an EMANE event service to feed in antenna,
location, and pathloss events to demonstrate how antenna profiles
can be used.
### EMANE Event Dump
On n1, let's dump EMANE events so that when we later run the EMANE event service,
you can monitor when and what is sent.
@ -352,38 +359,44 @@ root@n1:/tmp/pycore.44917/n1.conf# emaneevent-dump -i ctrl0
```
### Send EMANE Events
On the host machine create the following to send EMANE events.
!!! warning
Make sure to set the `eventservicedevice` to the proper control
network value
Create `eventservice.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
<eventservice>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.da"/>
<generator definition="eelgenerator.xml"/>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.da"/>
<generator definition="eelgenerator.xml"/>
</eventservice>
```
Create `eelgenerator.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
<eventgenerator library="eelgenerator">
<param name="inputfile" value="scenario.eel" />
<param name="inputfile" value="scenario.eel"/>
<paramlist name="loader">
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
</paramlist>
</eventgenerator>
```
Create `scenario.eel` with the following contents.
```shell
0.0 nem:1 antennaprofile 1,0.0,0.0
0.0 nem:4 antennaprofile 2,0.0,0.0
@ -413,23 +426,25 @@ Create `scenario.eel` with the following contents.
Run the EMANE event service, monitor the events dumped on n1, and watch
the link changes within the CORE GUI.
```shell
emaneeventservice -l 3 eventservice.xml
```
### Stages
The events sent will trigger 4 different states.
* State 1
    * n2 and n3 see each other
    * n4 and n3 are pointing away
* State 2
    * n2 and n3 see each other
    * n1 and n2 see each other
    * n4 and n3 see each other
* State 3
    * n2 and n3 see each other
    * n4 and n3 are pointing at each other but blocked
* State 4
    * n2 and n3 see each other
    * n4 and n3 see each other

View file

@ -1,44 +1,51 @@
# EMANE Emulation Event Log (EEL) Generator
## Overview
Introduction to using the EMANE event service and eel files to provide events.
See [EMANE Demo 1](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-1)
for more specifics.
## Run Demo
1. Select `Open...` within the GUI
2. Load `emane-demo-eel.xml`
3. Click ![Start Button](../static/gui/start.png)
4. After startup completes, double-click n1 to bring up the node's terminal
## Example Demo
This demo will go over defining an EMANE event service and an EEL file to
drive EMANE events.
### Viewing Events
On n1 we will use the EMANE event dump utility to listen to events.
```shell
root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0
```
### Sending Events
On the host machine we will create the following files and start the
EMANE event service targeting the control network.
!!! warning
Make sure to set the `eventservicedevice` to the proper control
network value
Create `eventservice.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
<eventservice>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.f"/>
<generator definition="eelgenerator.xml"/>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.f"/>
<generator definition="eelgenerator.xml"/>
</eventservice>
```
@ -57,21 +64,23 @@ These configuration items tell the EEL Generator which sentences to map to
which plugin and whether to issue delta or full updates.
Create `eelgenerator.xml` with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
<eventgenerator library="eelgenerator">
<param name="inputfile" value="scenario.eel" />
<param name="inputfile" value="scenario.eel"/>
<paramlist name="loader">
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
</paramlist>
</eventgenerator>
```
Finally, create `scenario.eel` with the following contents.
```shell
0.0 nem:1 pathloss nem:2,90.0
0.0 nem:2 pathloss nem:1,90.0
@ -80,11 +89,13 @@ Finally, create `scenario.eel` with the following contents.
```
Start the EMANE event service using the files created above.
```shell
emaneeventservice eventservice.xml -l 3
```
### Sent Events
If we go back to look at our original terminal, we will see the events logged
out to the terminal.

View file

@ -1,8 +1,7 @@
# EMANE XML Files
## Overview
Introduction to the XML files generated by CORE used to drive EMANE for
a given node.
@ -10,12 +9,14 @@ a given node.
may provide more helpful details.
## Run Demo
1. Select `Open...` within the GUI
2. Load `emane-demo-files.xml`
3. Click ![Start Button](../static/gui/start.png)
4. After startup completes, double-click n1 to bring up the node's terminal
## Example Demo
We will take a look at the files generated in the example demo provided. In this
case we are running the RF Pipe model.
@ -31,6 +32,7 @@ case we are running the RF Pipe model.
| \<interface name>-trans.xml | configuration when a raw transport is being used |
### Listing File
Below are the files within n1 after starting the demo session.
```shell
@ -41,6 +43,7 @@ eth0-phy.xml n1-emane.log usr.local.etc.quagga var.run.quagga
```
### Platform XML
The root configuration file used to run EMANE for a node is the platform xml file.
In this demo we are looking at `n1-platform.xml`.
@ -78,6 +81,7 @@ root@n1:/tmp/pycore.46777/n1.conf# cat n1-platform.xml
```
### NEM XML
The nem definition will contain references to the transport, mac, and phy xml
definitions being used for a given nem.
@ -93,6 +97,7 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-nem.xml
```
### MAC XML
MAC layer configuration settings would be found in this file. CORE will write
out all values, even if the value is a default value.
@ -115,6 +120,7 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-mac.xml
```
### PHY XML
PHY layer configuration settings would be found in this file. CORE will write
out all values, even if the value is a default value.
@ -149,6 +155,7 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-phy.xml
```
### Transport XML
```shell
root@n1:/tmp/pycore.46777/n1.conf# cat eth0-trans-virtual.xml
<?xml version='1.0' encoding='UTF-8'?>

View file

@ -1,54 +1,62 @@
# EMANE GPSD Integration
## Overview
Introduction to integrating gpsd in CORE with EMANE.
[EMANE Demo 0](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-0)
may provide more helpful details.
!!! warning
Requires installation of [gpsd](https://gpsd.gitlab.io/gpsd/index.html)
## Run Demo
1. Select `Open...` within the GUI
2. Load `emane-demo-gpsd.xml`
3. Click ![Start Button](../static/gui/start.png)
4. After startup completes, double-click n1 to bring up the node's terminal
## Example Demo
This section will cover how to run a gpsd location agent within EMANE that will
write out locations to a pseudo terminal file. That file can be read by the
gpsd server to make EMANE location events available to gpsd clients.
### EMANE GPSD Event Daemon
First create an `eventdaemon.xml` file on n1 with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventdaemon SYSTEM "file:///usr/share/emane/dtd/eventdaemon.dtd">
<eventdaemon nemid="1">
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="ctrl0"/>
<agent definition="gpsdlocationagent.xml"/>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="ctrl0"/>
<agent definition="gpsdlocationagent.xml"/>
</eventdaemon>
```
Then create the `gpsdlocationagent.xml` file on n1 with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventagent SYSTEM "file:///usr/share/emane/dtd/eventagent.dtd">
<eventagent library="gpsdlocationagent">
<param name="pseudoterminalfile" value="gps.pty"/>
<param name="pseudoterminalfile" value="gps.pty"/>
</eventagent>
```
Start the EMANE event agent. This will facilitate feeding location events
out to the pseudo terminal file defined above.
```shell
emaneeventd eventdaemon.xml -r -d -l 3 -f emaneeventd.log
```
Start gpsd, reading in the pseudo terminal file.
```shell
gpsd -G -n -b $(cat gps.pty)
```
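To verify positions are flowing, point a gpsd client at the server (a sketch; *gpspipe* is one such client shipped with most gpsd packages):

```shell
# print a few reports from gpsd, then exit
gpspipe -w -n 5
```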
@ -59,36 +67,41 @@ EEL Events will be played out from the actual host machine over the designated
control network interface. Create the following files in the same directory
somewhere on your host.
!!! note
Make sure the below eventservicedevice matches the control network
device being used on the host for EMANE
Create `eventservice.xml` on the host machine with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventservice SYSTEM "file:///usr/share/emane/dtd/eventservice.dtd">
<eventservice>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.1"/>
<generator definition="eelgenerator.xml"/>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="b.9001.1"/>
<generator definition="eelgenerator.xml"/>
</eventservice>
```
Create `eelgenerator.xml` on the host machine with the following contents.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eventgenerator SYSTEM "file:///usr/share/emane/dtd/eventgenerator.dtd">
<eventgenerator library="eelgenerator">
<param name="inputfile" value="scenario.eel" />
<param name="inputfile" value="scenario.eel"/>
<paramlist name="loader">
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
<item value="commeffect:eelloadercommeffect:delta"/>
<item value="location,velocity,orientation:eelloaderlocation:delta"/>
<item value="pathloss:eelloaderpathloss:delta"/>
<item value="antennaprofile:eelloaderantennaprofile:delta"/>
</paramlist>
</eventgenerator>
```
Create `scenario.eel` file with the following contents.
```shell
0.0 nem:1 location gps 40.031075,-74.523518,3.000000
0.0 nem:2 location gps 40.031165,-74.523412,3.000000
@ -96,7 +109,8 @@ Create `scenario.eel` file with the following contents.
Start the EEL event service, which will send the events defined in the file above
over the control network to all EMANE nodes. These location events will be received
and provided to gpsd. This allows gpsd clients to connect and get GPS locations.
```shell
emaneeventservice eventservice.xml -l 3
```

View file

@ -1,35 +1,40 @@
# EMANE Precomputed
## Overview
Introduction to using the precomputed propagation model.
See [EMANE Demo 1](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-1)
for more specifics.
## Run Demo
1. Select `Open...` within the GUI
2. Load `emane-demo-precomputed.xml`
3. Click ![Start Button](../static/gui/start.png)
4. After startup completes, double-click n1 to bring up the node's terminal
## Example Demo
This demo is using the RF Pipe model with the propagation model set to
precomputed.
### Failed Pings
Due to using precomputed and having not sent any pathloss events, the nodes
cannot ping each other yet.
Open a terminal on n1.
```shell
root@n1:/tmp/pycore.46777/n1.conf# ping 10.0.0.2
connect: Network is unreachable
```
### EMANE Shell
You can leverage `emanesh` to investigate why packets are being dropped.
```shell
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy BroadcastPacketDropTable0 UnicastPacketDropTable0
nem 1 phy BroadcastPacketDropTable0
@ -43,6 +48,7 @@ nem 1 phy UnicastPacketDropTable0
In the example above, we can see that packets are being dropped due to
the propagation model, because we have not issued any pathloss events.
You can run another command to validate whether you have received any pathloss events.
```shell
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy PathlossEventInfoTable
nem 1 phy PathlossEventInfoTable
@ -50,15 +56,19 @@ nem 1 phy PathlossEventInfoTable
```
### Pathloss Events
On the host we will send pathloss events from all nems to all other nems.
!!! note
Make sure to properly specify the right control network device
```shell
emaneevent-pathloss 1:2 90 -i <controlnet device>
```
Now if we check for pathloss events on n2, we will see what was just sent above.
```shell
root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy PathlossEventInfoTable
nem 1 phy PathlossEventInfoTable
@ -67,6 +77,7 @@ nem 1 phy PathlossEventInfoTable
```
You should also now be able to ping n1 from n2.
```shell
root@n1:/tmp/pycore.46777/n1.conf# ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.

View file

@ -1,7 +1,6 @@
# gRPC API
## Overview
[gRPC](https://grpc.io/) is a client/server API for interfacing with CORE
and used by the python GUI for driving all functionality. It is dependent
@ -9,7 +8,7 @@ on having a running `core-daemon` instance to be leveraged.
A python client can be created from the raw generated grpc files included
with CORE or one can leverage a provided gRPC client that helps encapsulate
some functionality to try and help make things easier.
## Python Client
@ -19,7 +18,7 @@ to help provide some conveniences when using the API.
### Client HTTP Proxy
Since gRPC is HTTP2 based, proxy configurations can cause issues. By default,
the client disables proxy support to avoid issues when a proxy is present.
You can enable and properly account for this issue when needed.
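As an operational workaround, a proxy can also be cleared for the shell running a client script (a sketch; which variables are set depends on your environment):

```shell
# avoid routing localhost gRPC traffic through an HTTP proxy
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
```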
@ -41,13 +40,13 @@ When creating nodes of type `NodeType.DEFAULT` these are the default models
and the services they map to.
* mdr
    * zebra, OSPFv3MDR, IPForward
* PC
    * DefaultRoute
* router
    * zebra, OSPFv2, OSPFv3, IPForward
* host
    * DefaultRoute, SSH
### Interface Helper
@ -56,8 +55,10 @@ when creating interface data for nodes. Alternatively one can manually create
a `core.api.grpc.wrappers.Interface` instance with appropriate information.
Manually creating gRPC client interface:
```python
from core.api.grpc.wrappers import Interface
# id is optional and will set to the next available id
# name is optional and will default to eth<id>
# mac is optional and will result in a randomly generated mac
@ -72,6 +73,7 @@ iface = Interface(
```
Leveraging the interface helper class:
```python
from core.api.grpc import client
@ -90,6 +92,7 @@ iface_data = iface_helper.create_iface(
Various events that can occur within a session can be listened to.
Event types:
* session - events for changes in session state and mobility start/stop/pause
* node - events for node movements and icon changes
* link - events for link configuration changes and wireless link add/delete
@ -101,9 +104,11 @@ Event types:
from core.api.grpc import client
from core.api.grpc.wrappers import EventType
def event_listener(event):
    print(event)
# create grpc client and connect
core = client.CoreGrpcClient()
core.connect()
@ -123,6 +128,7 @@ core.events(session.id, event_listener, [EventType.NODE])
Links can be configured at the time of creation or during runtime.
Currently supported configuration options:
* bandwidth (bps)
* delay (us)
* duplicate (%)
@ -167,10 +173,11 @@ core.edit_link(session.id, link)
```
### Peer to Peer Example
```python
# required imports
from core.api.grpc import client
from core.api.grpc.wrappers import Position
# interface helper
iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
@ -198,10 +205,11 @@ core.start_session(session)
```
### Switch/Hub Example
```python
# required imports
from core.api.grpc import client
from core.api.grpc.wrappers import NodeType, Position
# interface helper
iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
@ -232,10 +240,11 @@ core.start_session(session)
```
### WLAN Example
```python
# required imports
from core.api.grpc import client
from core.api.grpc.wrappers import NodeType, Position
# interface helper
iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
@ -283,6 +292,7 @@ For EMANE you can import and use one of the existing models and
use its name for configuration.
Current models:
* core.emane.ieee80211abg.EmaneIeee80211abgModel
* core.emane.rfpipe.EmaneRfPipeModel
* core.emane.tdma.EmaneTdmaModel
@ -299,7 +309,7 @@ will use the defaults. When no configuration is used, the defaults are used.
```python
# required imports
from core.api.grpc import client
from core.api.grpc.wrappers import NodeType, Position
from core.emane.models.ieee80211abg import EmaneIeee80211abgModel
# interface helper
@ -315,7 +325,7 @@ session = core.create_session()
# create nodes
position = Position(x=200, y=200)
emane = session.add_node(
1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name
)
position = Position(x=100, y=100)
node1 = session.add_node(2, model="mdr", position=position)
@ -330,8 +340,8 @@ session.add_link(node1=node2, node2=emane, iface1=iface1)
# setting emane specific emane model configuration
emane.set_emane_model(EmaneIeee80211abgModel.name, {
"eventservicettl": "2",
"unicastrate": "3",
"eventservicettl": "2",
"unicastrate": "3",
})
# start session
@ -339,6 +349,7 @@ core.start_session(session)
```
EMANE Model Configuration:
```python
# emane network specific config, set on an emane node
# this setting applies to all nodes connected
@ -359,6 +370,7 @@ Configuring the files of a service results in a specific hard coded script being
generated, instead of the default scripts, which may leverage dynamic generation.
The following features can be configured for a service:
* files - files that will be generated
* directories - directories that will be mounted unique to the node
* startup - commands to run to start a service
@ -366,6 +378,7 @@ The following features can be configured for a service:
* shutdown - commands to run to stop a service
Editing service properties:
```python
# configure a service, for a node, for a given session
node.service_configs[service_name] = NodeServiceData(
@ -381,6 +394,7 @@ When editing a service file, it must be the name of `config`
file that the service will generate.
Editing a service file:
```python
# to edit the contents of a generated file you can specify
# the service, the file name, and its contents

View file

@ -1,9 +1,5 @@
# CORE GUI
![](static/core-gui.png)
## Overview
@ -12,7 +8,7 @@ The GUI is used to draw nodes and network devices on a canvas, linking them
together to create an emulated network session.
After pressing the start button, CORE will proceed through these phases,
staying in the **runtime** phase. After the session is stopped, CORE will
proceed to the **data collection** phase before tearing down the emulated
state.
@ -22,7 +18,7 @@ when these session states are reached.
## Prerequisites
Beyond installing CORE, you must have the CORE daemon running. This is done
on the command line with either systemd or sysv.
```shell
@ -40,24 +36,24 @@ The GUI will create a directory in your home directory on first run called
~/.coregui. This directory will help lay out various files that the GUI may use.
* .coregui/
    * backgrounds/
        * place backgrounds used for display in the GUI
    * custom_emane/
        * place to keep custom emane models to use with the core-daemon
    * custom_services/
        * place to keep custom services to use with the core-daemon
    * icons/
        * icons the GUI uses along with custom icons desired
    * mobility/
        * place to keep custom mobility files
    * scripts/
        * place to keep core related scripts
    * xmls/
        * place to keep saved session xml files
    * gui.log
        * log file when running the gui, look here when issues occur for exceptions etc
    * config.yaml
        * configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc)
## Modes of Operation
@ -309,168 +305,6 @@ and options.
| CORE Documentation (www) | Link to the CORE Documentation page. |
| About | Invokes the About dialog box for viewing version information. |
## Connecting with Physical Networks
CORE's emulated networks run in real time, so they can be connected to live
physical networks. The RJ45 tool and the Tunnel tool help with connecting to
the real world. These tools are available from the **Link-layer nodes** menu.
When connecting two or more CORE emulations together, MAC address collisions
should be avoided. CORE automatically assigns MAC addresses to interfaces when
the emulation is started, starting with **00:00:00:aa:00:00** and incrementing
the bottom byte. The starting byte should be changed on the second CORE machine
using the **Tools->MAC Addresses** option in the menu.
### RJ45 Tool
The RJ45 node in CORE represents a physical interface on the real CORE machine.
Any real-world network device can be connected to the interface and communicate
with the CORE nodes in real time.
The main drawback is that one physical interface is required for each
connection. When the physical interface is assigned to CORE, it may not be used
for anything else. Another consideration is that the computer or network that
you are connecting to must be co-located with the CORE machine.
To place an RJ45 connection, click on the **Link-layer nodes** toolbar and select
the **RJ45 Tool** from the submenu. Click on the canvas near the node you want to
connect to. This could be a router, hub, switch, or WLAN, for example. Now
click on the *Link Tool* and draw a link between the RJ45 and the other node.
The RJ45 node will display "UNASSIGNED". Double-click the RJ45 node to assign a
physical interface. A list of available interfaces will be shown, and one may
be selected by double-clicking its name in the list, or an interface name may
be entered into the text box.
> **NOTE:** When you press the Start button to instantiate your topology, the
interface assigned to the RJ45 will be connected to the CORE topology. The
interface can no longer be used by the system.
Multiple RJ45 nodes can be used within CORE and assigned to the same physical
interface if 802.1Q VLANs are used. This allows for more RJ45 nodes than
physical ports are available, but the (e.g. switching) hardware connected to
the physical port must support the VLAN tagging, and the available bandwidth
will be shared.
You need to create separate VLAN virtual devices on the Linux host,
and then assign these devices to RJ45 nodes inside of CORE. The VLANning is
actually performed outside of CORE, so when the CORE emulated node receives a
packet, the VLAN tag will already be removed.
Here are example commands for creating VLAN devices under Linux:
```shell
ip link add link eth0 name eth0.1 type vlan id 1
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
```
### Tunnel Tool
The tunnel tool builds GRE tunnels between CORE emulations or other hosts.
Tunneling can be helpful when the number of physical interfaces is limited or
when the peer is located on a different network. Also a physical interface does
not need to be dedicated to CORE as with the RJ45 tool.
The peer GRE tunnel endpoint may be another CORE machine or another
host that supports GRE tunneling. When placing a Tunnel node, initially
the node will display "UNASSIGNED". This text should be replaced with the IP
address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.
> **NOTE:** Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. The *gretap* device
has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
bridge's MTU
becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
large packets if other bridge ports have a higher MTU such as 1,500 bytes.
The GRE key is used to identify flows with GRE tunneling. This allows multiple
GRE tunnels to exist between that same pair of tunnel peers. A unique number
should be used when multiple tunnels are used with the same peer. When
configuring the peer side of the tunnel, ensure that the matching keys are
used.
Here are example commands for building the other end of a tunnel on a Linux
machine. In this example, a router in CORE has the virtual address
**10.0.0.1/24** and the CORE host machine has the (real) address
**198.51.100.34/24**. The Linux box
that will connect with the CORE machine is reachable over the (real) network
at **198.51.100.76/24**.
The emulated router is linked with the Tunnel Node. In the
Tunnel Node configuration dialog, the address **198.51.100.76** is entered, with
the key set to **1**. The gretap interface on the Linux box will be assigned
an address from the subnet of the virtual router node,
**10.0.0.2/24**.
```shell
# these commands are run on the tunnel peer
sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
sudo ip addr add 10.0.0.2/24 dev gt0
sudo ip link set dev gt0 up
```
Now the virtual router should be able to ping the Linux machine:
```shell
# from the CORE router node
ping 10.0.0.2
```
And the Linux machine should be able to ping inside the CORE emulation:
```shell
# from the tunnel peer
ping 10.0.0.1
```
To debug this configuration, **tcpdump** can be run on the gretap devices, or
on the physical interfaces on the CORE or Linux machines. Make sure that a
firewall is not blocking the GRE traffic.
### Communicating with the Host Machine
The host machine that runs the CORE GUI and/or daemon is not necessarily
accessible from a node. Running an X11 application on a node, for example,
requires some channel of communication for the application to connect with
the X server for graphical display. There are different ways to
connect from the node to the host and vice versa.
#### Control Network
The quickest way to connect with the host machine is through the primary control
network.
With a control network, the host can launch an X11 application on a node.
To run an X11 application on the node, the **SSH** service can be enabled on
the node, and SSH with X11 forwarding can be used from the host to the node.
```shell
# SSH from host to node n5 to run an X11 app
ssh -X 172.16.0.5 xclock
```
#### Other Methods
There are still other ways to connect a host with a node. The RJ45 Tool
can be used in conjunction with a dummy interface to access a node:
```shell
sudo modprobe dummy numdummies=1
```
A **dummy0** interface should appear on the host. Use the RJ45 tool assigned
to **dummy0**, and link this to a node in your scenario. After starting the
session, configure an address on the host.
```shell
sudo ip link show type bridge
# determine bridge name from the above command
# assign an IP address on the same network as the linked node
sudo ip addr add 10.0.1.2/24 dev b.48304.34658
```
In the example shown above, the host will have the address **10.0.1.2** and
the node linked to the RJ45 may have the address **10.0.1.1**.
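With the example addresses above, connectivity can be sanity checked from the host:
```shell
# from the host, ping the node linked to the RJ45
ping 10.0.1.1
```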
## Building Sample Networks
### Wired Networks
@ -501,21 +335,20 @@ CORE offers several levels of wireless emulation fidelity, depending on modeling
hardware.
* WLAN Node
* uses set bandwidth, delay, and loss
* links are enabled or disabled based on a set range
* uses the least CPU when moving, but nothing extra when not moving
* Wireless Node
* uses set bandwidth, delay, and initial loss
* loss dynamically changes based on distance between nodes, which can be configured with range parameters
* links are enabled or disabled based on a set range
* uses more CPU to calculate loss for every movement, but nothing extra when not moving
* EMANE Node
* uses a physical layer model to account for signal propagation, antenna profile effects and interference
sources in order to provide a realistic environment for wireless experimentation
* uses the most CPU for every packet, as complex calculations are used for fidelity
* See [Wiki](https://github.com/adjacentlink/emane/wiki) for details on general EMANE usage
* See [CORE EMANE](emane.md) for details on using EMANE in CORE
| Model | Type | Supported Platform(s) | Fidelity | Description |
|----------|--------|-----------------------|----------|-------------------------------------------------------------------------------|
@ -545,7 +378,7 @@ The default configuration of the WLAN is set to use the basic range model. Havin
selected causes **core-daemon** to calculate the distance between nodes based
on screen pixels. A numeric range in screen pixels is set for the wireless
network using the **Range** slider. When two wireless nodes are within range of
each other, a green line is drawn between them and they are linked. Two
wireless nodes that are farther than the range pixels apart are not linked.
During Execute mode, users may move wireless nodes around by clicking and
dragging them, and wireless links will be dynamically made or broken.
@ -561,7 +394,8 @@ CORE has a few ways to script mobility.
| EMANE events | See [EMANE](emane.md) for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude. |
For the first method, you can create a mobility script using a text
editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate
the script with one of the wireless networks
using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
button, and set the *mobility script file* field in the resulting *ns2script*
configuration dialog.

127
docs/hitl.md Normal file
View file

@ -0,0 +1,127 @@
# Hardware In The Loop
## Overview
In some cases it may be impossible or impractical to run software using CORE
nodes alone. You may need to bring external hardware into the network.
CORE's emulated networks run in real time, so they can be connected to live
physical networks. The RJ45 tool and the Tunnel tool help with connecting to
the real world. These tools are available from the **Link Layer Nodes** menu.
When connecting two or more CORE emulations together, MAC address collisions
should be avoided. CORE automatically assigns MAC addresses to interfaces when
the emulation is started, starting with **00:00:00:aa:00:00** and incrementing
the bottom byte. The starting byte should be changed on the second CORE machine
using the **Tools->MAC Addresses** option in the menu.
## RJ45 Node
CORE provides the RJ45 node, which represents a physical
interface within the host that is running CORE. Any real-world network
device can be connected to the interface and communicate with the CORE nodes in real time.
The main drawback is that one physical interface is required for each
connection. When the physical interface is assigned to CORE, it may not be used
for anything else. Another consideration is that the computer or network that
you are connecting to must be co-located with the CORE machine.
### GUI Usage
To place an RJ45 connection, click on the **Link Layer Nodes** toolbar and select
the **RJ45 Node** from the options. Click on the canvas where you would like
the node to be placed. Now click on the **Link Tool** and draw a link between the RJ45
and the other node you wish to connect to. The RJ45 node will display "UNASSIGNED".
Double-click the RJ45 node to assign a physical interface. A list of available
interfaces will be shown; select one, then select **Apply**.
!!! note
When you press the Start button to instantiate your topology, the
interface assigned to the RJ45 will be connected to the CORE topology. The
interface can no longer be used by the system.
### Multiple RJ45s with One Interface (VLAN)
It is possible to have multiple RJ45 nodes using the same physical interface
by leveraging 802.1Q VLANs. This allows for more RJ45 nodes than available
physical ports, but the (e.g. switching) hardware connected to the physical port
must support the VLAN tagging, and the available bandwidth will be shared.
You need to create separate VLAN virtual devices on the Linux host,
and then assign these devices to RJ45 nodes inside of CORE. The VLANing is
actually performed outside of CORE, so when the CORE emulated node receives a
packet, the VLAN tag will already be removed.
Here are example commands for creating VLAN devices under Linux:
```shell
ip link add link eth0 name eth0.1 type vlan id 1
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
```
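Newly created VLAN devices typically come up in the down state; a small follow-up, assuming the device names above:
```shell
# bring the VLAN devices up before assigning them to RJ45 nodes
ip link set dev eth0.1 up
ip link set dev eth0.2 up
ip link set dev eth0.3 up
```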
## Tunnel Tool
The tunnel tool builds GRE tunnels between CORE emulations or other hosts.
Tunneling can be helpful when the number of physical interfaces is limited or
when the peer is located on a different network. In this case a physical interface does
not need to be dedicated to CORE as with the RJ45 tool.
The peer GRE tunnel endpoint may be another CORE machine or another
host that supports GRE tunneling. When placing a Tunnel node, initially
the node will display "UNASSIGNED". This text should be replaced with the IP
address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.
!!! note
Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices.
The *gretap* device has an interface MTU of 1,458 bytes; when joined to a Linux
bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform
fragmentation for large packets if other bridge ports have a higher MTU such
as 1,500 bytes.
The GRE key is used to identify flows with GRE tunneling. This allows multiple
GRE tunnels to exist between that same pair of tunnel peers. A unique number
should be used when multiple tunnels are used with the same peer. When
configuring the peer side of the tunnel, ensure that the matching keys are
used.
### Example Usage
Here are example commands for building the other end of a tunnel on a Linux
machine. In this example, a router in CORE has the virtual address
**10.0.0.1/24** and the CORE host machine has the (real) address
**198.51.100.34/24**. The Linux box
that will connect with the CORE machine is reachable over the (real) network
at **198.51.100.76/24**.
The emulated router is linked with the Tunnel Node. In the
Tunnel Node configuration dialog, the address **198.51.100.76** is entered, with
the key set to **1**. The gretap interface on the Linux box will be assigned
an address from the subnet of the virtual router node,
**10.0.0.2/24**.
```shell
# these commands are run on the tunnel peer
sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
sudo ip addr add 10.0.0.2/24 dev gt0
sudo ip link set dev gt0 up
```
Now the virtual router should be able to ping the Linux machine:
```shell
# from the CORE router node
ping 10.0.0.2
```
And the Linux machine should be able to ping inside the CORE emulation:
```shell
# from the tunnel peer
ping 10.0.0.1
```
To debug this configuration, **tcpdump** can be run on the gretap devices, or
on the physical interfaces on the CORE or Linux machines. Make sure that a
firewall is not blocking the GRE traffic.
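For example, a couple of hedged starting points, with interface names assumed from the example above:
```shell
# watch GRE-encapsulated packets (IP protocol 47) on the physical interface
sudo tcpdump -ni eth0 ip proto 47
# watch decapsulated traffic on the gretap device itself
sudo tcpdump -ni gt0
```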

View file

@ -4,32 +4,15 @@
CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a
representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are
used. The live-running emulation can be connected to physical networks and routers. It provides an environment for
running real applications and protocols, taking advantage of tools provided by the Linux operating system.
CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating
networking scenarios, security studies, and increasing the size of physical test networks.
### Key Features
* Efficient and scalable
* Runs applications and protocols without modification
* Drag and drop GUI
* Highly customizable
## Topics
| Topic | Description |
|--------------------------------------|-------------------------------------------------------------------|
| [Installation](install.md) | How to install CORE and its requirements |
| [Architecture](architecture.md) | Overview of the architecture |
| [Node Types](nodetypes.md) | Overview of node types supported within CORE |
| [GUI](gui.md) | How to use the GUI |
| [Python API](python.md) | Covers how to control core directly using python |
| [gRPC API](grpc.md) | Covers how to control core using gRPC |
| [Distributed](distributed.md) | Details for running CORE across multiple servers |
| [Control Network](ctrlnet.md) | How to use control networks to communicate with nodes from host |
| [Config Services](configservices.md) | Overview of provided config services and creating custom ones |
| [Services](services.md) | Overview of provided services and creating custom ones |
| [EMANE](emane.md) | Overview of EMANE integration and integrating custom EMANE models |
| [Performance](performance.md) | Notes on performance when using CORE |
| [Developers Guide](devguide.md) | Overview on how to contribute to CORE |

View file

@ -1,11 +1,12 @@
# Installation
* Table of Contents
{:toc}
!!! warning
If Docker is installed, the default iptable rules will block CORE traffic
## Overview
CORE currently supports and provides the following install options, with the package
CORE currently supports and provides the following installation options, with the package
option being preferred.
* [Package based install (rpm/deb)](#package-based-install)
@ -13,6 +14,7 @@ option being preferred.
* [Dockerfile based install](#dockerfile-based-install)
### Requirements
Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous
containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.
@ -21,31 +23,35 @@ containers, as a general rule you should select a machine having as much RAM and
* nftables compatible kernel and nft command line tool
### Supported Linux Distributions
The plan is to support recent Ubuntu and CentOS LTS releases.
Verified:
* Ubuntu - 18.04, 20.04, 22.04
* CentOS - 7.8
### Files
The following is a list of files that will be present after installation.
* executables
* `<prefix>/bin/{vcmd, vnode}`
* can be adjusted using script based install, package will be /usr
* python files
* virtual environment `/opt/core/venv`
* local install will be local to the python version used
* `python3 -c "import core; print(core.__file__)"`
* scripts {core-daemon, core-cleanup, etc}
* virtualenv `/opt/core/venv/bin`
* local `/usr/local/bin`
* configuration files
* `/etc/core/{core.conf, logging.conf}`
* ospf mdr repository files when using script based install
* `<repo>/../ospf-mdr`
### Installed Scripts
The following python scripts are provided.
| Name | Description |
@ -59,17 +65,20 @@ The following python scripts are provided.
| core-service-update | tool to automate updating a legacy service to match current naming |
### Upgrading from Older Release
Please make sure to uninstall any previous installations of CORE cleanly
before proceeding to install.
To clear out a current install from 7.0.0+, make sure to provide the options
used for the install (`-l` or `-p`).
```shell
cd <CORE_REPO>
inv uninstall <options>
```
If the previous install was built from source for a CORE release older than 7.0.0:
```shell
cd <CORE_REPO>
sudo make uninstall
@ -78,6 +87,7 @@ make clean
```
If installed from previously built packages:
```shell
# centos
sudo yum remove core
@ -103,10 +113,13 @@ The built packages will require and install system level dependencies, as well a
a post install script to install the provided CORE python wheel. A similar uninstall script
is run when uninstalling and requires the same options as given during the install.
!!! note
PYTHON defaults to python3 for installs below, CORE requires python3.9+, pip,
tk compatibility for python gui, and venv for virtual environments
Examples for install:
```shell
# recommended to upgrade to the latest version of pip before installation
# in python, can help avoid building from source issues
@ -128,6 +141,7 @@ sudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl
```
Example for removal, requires using the same options as install:
```shell
# remove a standard install
sudo <yum/apt> remove core
@ -142,6 +156,7 @@ sudo NO_PYTHON=1 <yum/apt> remove core
```
### Installing OSPF MDR
You will need to manually install OSPF MDR for routing nodes, since this is not
provided by the package.
@ -159,6 +174,7 @@ sudo make install
When done see [Post Install](#post-install).
## Script Based Install
The script based installation will install system level dependencies, python library and
dependencies, as well as dependencies for building CORE.
@ -166,17 +182,22 @@ The script based install also automatically builds and installs OSPF MDR, used b
on routing nodes. This can optionally be skipped.
Installation will carry out the following steps:
* installs system dependencies for building core
* builds vcmd/vnoded and python grpc files
* installs core into poetry managed virtual environment or locally, if flag is passed
* installs systemd service pointing to appropriate python location based on install type
* clone/build/install working version of [OSPF MDR](https://github.com/USNavalResearchLaboratory/ospf-mdr)
!!! note
Installing locally comes with its own risks, it can result in potential
dependency conflicts with system package manager installed python dependencies
!!! note
Provide a prefix that will be found on path when running as sudo,
if the default prefix /usr/local will not be valid
The following tools will be leveraged during installation:
@ -188,6 +209,7 @@ The following tools will be leveraged during installation:
| [poetry](https://python-poetry.org/) | used to install python virtual environment or building a python wheel |
First we will need to clone and navigate to the CORE repo.
```shell
# clone CORE repo
git clone https://github.com/coreemu/core.git
@ -229,6 +251,7 @@ Options:
When done see [Post Install](#post-install).
### Unsupported Linux Distribution
For unsupported OSs you could attempt to do the following to translate
an installation to your use case.
@ -243,6 +266,7 @@ inv install --dry -v -p <prefix> -i <install type>
```
## Dockerfile Based Install
You can leverage one of the provided Dockerfiles to run and launch CORE within a Docker container.
Since CORE nodes will leverage software available within the system for a given use case,
@ -253,7 +277,7 @@ make sure to update and build the Dockerfile with desired software.
git clone https://github.com/coreemu/core.git
cd core
# build image
sudo docker build -t core -f Dockerfile.<centos,ubuntu,oracle> .
sudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu,oracle> .
# start container
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
# enable xhost access to the root user
@ -265,7 +289,10 @@ sudo docker exec -it core core-gui
When done see [Post Install](#post-install).
## Installing EMANE
!!! note
Installing EMANE for the virtual environment is known to work for 1.21+
The recommended way to install EMANE is using prebuilt packages, otherwise
you can follow their instructions for installing from source. Installation
@ -282,6 +309,7 @@ Also, these EMANE bindings need to be built using `protoc` 3.19+. So make sure
that is available and being picked up on PATH properly.
Examples for building and installing EMANE python bindings for use in CORE:
```shell
# if your system does not have protoc 3.19+
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
@ -306,32 +334,39 @@ inv install-emane -e <version tag>
```
## Post Install
After installation completes you are now ready to run CORE.
### Resolving Docker Issues
If you have Docker installed, by default it will change the iptables
forwarding chain to drop packets, which will cause issues for CORE traffic.
You can temporarily resolve the issue with the following command:
```shell
sudo iptables --policy FORWARD ACCEPT
```
Alternatively, you can configure Docker to avoid doing this, but doing so will
likely break normal Docker networking usage. Using the setting below will
require a restart.
Place the file contents below in **/etc/docker/daemon.json**
```json
{
"iptables": false
"iptables": false
}
```
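Then restart Docker so the setting takes effect; on a systemd-based system this would presumably be:
```shell
# restart docker to pick up the iptables setting
sudo systemctl restart docker
```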
### Resolving Path Issues
One problem you may run into when running CORE, whether using the virtual
environment or a local install, is issues related to your path.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -339,6 +374,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon
@ -350,6 +386,7 @@ sudop core-daemon
```
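The definition of the `sudop` alias is cut off in this diff; a sketch of what it presumably looks like:
```shell
# alias preserving the user PATH when running under sudo (assumed definition)
alias sudop='sudo env PATH=$PATH'
```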
### Running CORE
The following assumes you have resolved PATH issues and set up the `sudop` alias.
```shell
@ -360,6 +397,7 @@ core-gui
```
### Enabling Service
After installation, the core service is not enabled by default. If you desire to use the
service, run the following commands.
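The commands themselves are cut off here; assuming the systemd unit name used elsewhere in these docs, they would presumably be:
```shell
# enable and start the daemon service (assumed unit name)
sudo systemctl enable core-daemon
sudo systemctl start core-daemon
```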

View file

@ -1,5 +1,7 @@
# Install CentOS
## Overview
Below is a detailed path for installing CORE and related tooling on a fresh
CentOS 7 install. Both of the examples below will install CORE into its
own virtual environment located at **/opt/core/venv**. Both examples below
@ -122,6 +124,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
so some adjustments need to be made.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -129,6 +132,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon

View file

@ -1,7 +1,9 @@
# Install Ubuntu
## Overview
Below is a detailed path for installing CORE and related tooling on a fresh
Ubuntu 22.04 install. Both of the examples below will install CORE into its
Ubuntu 22.04 installation. Both of the examples below will install CORE into its
own virtual environment located at **/opt/core/venv**. Both examples below
also assume using **~/Documents** as the working directory.
@ -94,6 +96,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
so some adjustments need to be made.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -101,6 +104,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon

View file

@ -1,5 +1,7 @@
# LXC Support
## Overview
LXC nodes are provided by way of LXD to create nodes using predefined
images and provide file system separation.

View file

@ -1,7 +1,4 @@
# CORE Node Types
* Table of Contents
{:toc}
# Node Types
## Overview

View file

@ -1,8 +1,5 @@
# CORE Performance
* Table of Contents
{:toc}
## Overview
The top question about the performance of CORE is often *how many nodes can it
@ -16,7 +13,6 @@ handle?* The answer depends on several factors:
| Network traffic | the more packets that are sent around the virtual network increases the amount of CPU usage. |
| GUI usage | widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation. |
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3
routing. On this hardware CORE can instantiate 100 or more nodes, but at
@ -32,15 +28,17 @@ the number of times the system as a whole needed to deal with a packet. As
more network hops are added, this increases the number of context switches
and decreases the throughput seen on the full length of the network path.
!!! note
The right question to be asking is *"how much traffic?"*, not
*"how many nodes?"*.
For a more detailed study of performance in CORE, refer to the following
publications:
* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE
Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings
of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time
network emulator, Proceedings of IEEE MILCOM Conference, 2008.

View file

@ -1,8 +1,5 @@
# Python API
* Table of Contents
{:toc}
## Overview
Writing your own Python scripts offers a rich programming environment with

View file

@ -1,9 +1,6 @@
# CORE Services
# Services (Deprecated)
* Table of Contents
{:toc}
## Services
## Overview
CORE uses the concept of services to specify what processes or scripts run on a
node when it is started. Layer-3 nodes such as routers and PCs are defined by
@ -15,9 +12,11 @@ set of default services. Each service defines the per-node directories,
configuration files, startup index, starting commands, validation commands,
shutdown commands, and meta-data associated with a node.
!!! note
**Network namespace nodes do not undergo the normal Linux boot process**
using the **init**, **upstart**, or **systemd** frameworks. These
lightweight nodes use configured CORE *services*.
## Available Services
@ -71,11 +70,13 @@ the service customization dialog for that service.
The dialog has three tabs for configuring the different aspects of the service:
files, directories, and startup/shutdown.
!!! note
A **yellow** customize icon next to a service indicates that service
requires customization (e.g. the *Firewall* service).
A **green** customize icon indicates that a custom configuration exists.
Click the *Defaults* button when customizing a service to remove any
customizations.
The Files tab is used to display or edit the configuration files or scripts that
are used for this service. Files can be selected from a drop-down list, and
@ -90,10 +91,11 @@ per-node directories that are defined by the services. For example, the
the Zebra service, because Quagga running on each node needs to write separate
PID files to that directory.
!!! note
The **/var/log** and **/var/run** directories are
mounted uniquely per-node by default.
Per-node mount targets can be found in **/tmp/pycore.<session id>/<node name>.conf/**
The Startup/shutdown tab lists commands that are used to start and stop this
service. The startup index allows configuring when this service starts relative
@ -120,8 +122,10 @@ if a process is running and return zero when found. When a validate command
produces a non-zero return value, an exception is generated, which will cause
an error to be displayed in the Check Emulation Light.
!!! note
To start, stop, and restart services during run-time, right-click a
node and use the *Services...* menu.
## New Services
@ -138,6 +142,12 @@ ideas for a service before adding a new service type.
### Creating New Services
!!! note
The directory name used in **custom_services_dir** below should be unique and
should not correspond to any existing Python module name. For example, don't
use the name **subprocess** or **services**.
1. Modify the example service shown below
to do what you want. It could generate config/script files, mount per-node
directories, start processes/scripts, etc. sample.py is a Python file that
@ -151,12 +161,6 @@ ideas for a service before adding a new service type.
3. Add a **custom_services_dir = `/home/<user>/.coregui/custom_services`** entry to the
/etc/core/core.conf file.
4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax)
should be displayed in the daemon output.
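For example, with a systemd-managed install (an assumption), one way to restart the daemon and watch its output for import errors:
```shell
# restart the daemon and follow its logs
sudo systemctl restart core-daemon
sudo journalctl -u core-daemon -f
```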

View file

@ -1,8 +1,5 @@
# BIRD Internet Routing Daemon
* Table of Contents
{:toc}
## Overview
The [BIRD Internet Routing Daemon](https://bird.network.cz/) is a routing
@ -30,6 +27,7 @@ sudo apt-get install bird
You can download BIRD source code from its
[official repository.](https://gitlab.labs.nic.cz/labs/bird/)
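A hedged first step, using the repository URL linked above (the `.git` suffix is assumed):
```shell
# fetch the source and enter the tree
git clone https://gitlab.labs.nic.cz/labs/bird.git
cd bird
```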
```shell
./configure
make
@ -37,6 +35,7 @@ su
make install
vi /etc/bird/bird.conf
```
The installation will place the bird directory inside */etc* where you will
also find its config file.

View file

@ -1,8 +1,5 @@
# EMANE Services
* Table of Contents
{:toc}
## Overview
EMANE related services for CORE.

View file

@ -1,13 +1,14 @@
# FRRouting
* Table of Contents
{:toc}
## Overview
FRRouting is a routing software package that provides TCP/IP based routing services with routing protocols support such
as BGP, RIP, OSPF, IS-IS and more. FRR also supports special BGP Route Reflector and Route Server behavior. In addition
to traditional IPv4 routing protocols, FRR also supports IPv6 routing protocols. With an SNMP daemon that supports the
AgentX protocol, FRR provides routing protocol MIB read-only access (SNMP Support).
FRR (as of v7.2) currently supports the following protocols:
* BGPv4
* OSPFv2
* OSPFv3
@ -26,11 +27,13 @@ FRR (as of v7.2) currently supports the following protocols:
## FRRouting Package Install
Ubuntu 19.10 and later
```shell
sudo apt update && sudo apt install frr
```
Ubuntu 16.04 and Ubuntu 18.04
```shell
sudo apt install curl
curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
@ -38,25 +41,35 @@ FRRVER="frr-stable"
echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
sudo apt update && sudo apt install frr frr-pythontools
```
Fedora 31
```shell
sudo dnf update && sudo dnf install frr
```
## FRRouting Source Code Install
Building FRR from source is the best way to ensure you have the latest features and bug fixes. Details for each
supported platform, including dependency package listings, permissions, and other gotchas, are in the developers
documentation.
FRR's source is available on the project [GitHub page](https://github.com/FRRouting/frr).
```shell
git clone https://github.com/FRRouting/frr.git
```
Change into your FRR source directory and issue:
```shell
./bootstrap.sh
```
Then, choose the configuration options that you wish to use for the installation. You can find these options on
FRR's [official webpage](http://docs.frrouting.org/en/latest/installation.html). Once you have chosen your configure
options, run the configure script and pass the options you chose:
```shell
./configure \
--prefix=/usr \
@ -68,8 +81,11 @@ Then, choose the configuration options that you wish to use for the installation
--enable-watchfrr \
...
```
After configuring the software, you are ready to build and install it in your system.
```shell
make && sudo make install
```
If everything finishes successfully, FRR should be installed.

View file

@ -1,13 +1,21 @@
# NRL Services
* Table of Contents
{:toc}
## Overview
The Protean Protocol Prototyping Library (ProtoLib) is a cross-platform library that allows applications to be built
while supporting a variety of platforms including Linux, Windows, WinCE/PocketPC, MacOS, FreeBSD, Solaris, etc as well
as the simulation environments of NS2 and Opnet. The goal of the Protolib is to provide a set of simple, cross-platform
C++ classes that allow development of network protocols and applications that can run on different platforms and in
network simulation environments. While Protolib provides an overall framework for developing working protocol
implementations, applications, and simulation modules, the individual classes are designed for use as stand-alone
components when possible. Although Protolib is principally for research purposes, the code has been constructed to
provide robust, efficient performance and adaptability to real applications. In some cases, the code consists of data
structures, etc useful in protocol implementations and, in other cases, provides common, cross-platform interfaces to
system services and functions (e.g., sockets, timers, routing tables, etc).
Currently, the Naval Research Laboratory uses this library to develop a wide variety of protocols. The NRL Protolib
currently supports the following protocols:
* MGEN_Sink
* NHDP
* SMF
@ -19,11 +27,14 @@ Currently the Naval Research Laboratory uses this library to develop a wide vari
## NRL Installation
In order to be able to use the different protocols that NRL offers, you must first download the support library itself.
You can get the source code from their [NRL Protolib Repo](https://github.com/USNavalResearchLaboratory/protolib).
## Multi-Generator (MGEN)
Download MGEN from the [NRL MGEN Repo](https://github.com/USNavalResearchLaboratory/mgen), unpack it and copy the
protolib library into the main folder *mgen*. Execute the following commands to build the protocol.
```shell
cd mgen/makefiles
make -f Makefile.{os} mgen
@ -32,16 +43,22 @@ make -f Makefile.{os} mgen
## Neighborhood Discovery Protocol (NHDP)
Download NHDP from the [NRL NHDP Repo](https://github.com/USNavalResearchLaboratory/NCS-Downloads/tree/master/nhdp).
```shell
sudo apt-get install libpcap-dev libboost-all-dev
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protoc-3.8.0-linux-x86_64.zip
unzip protoc-3.8.0-linux-x86_64.zip
```
Then place the binaries in your $PATH. To see the directories on your path, you can issue the following command
```shell
echo $PATH
```
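For example, assuming the zip was unpacked in the current directory (the release zips place the binary at **bin/protoc**):
```shell
# copy protoc into a directory already on PATH
sudo cp bin/protoc /usr/local/bin/
```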
Go to the downloaded *NHDP* tarball, unpack it and place the protolib library inside the NHDP main folder. Now, compile
the NHDP Protocol.
```shell
cd nhdp/unix
make -f Makefile.{os}
@ -49,7 +66,9 @@ make -f Makefile.{os}
## Simplified Multicast Forwarding (SMF)
Download SMF from the [NRL SMF Repo](https://github.com/USNavalResearchLaboratory/nrlsmf), unpack it and place the
protolib library inside the *smf* main folder.
```shell
cd mgen/makefiles
make -f Makefile.{os}
@ -57,7 +76,10 @@ make -f Makefile.{os}
## Optimized Link State Routing Protocol (OLSR)
To install the OLSR protocol, download their source code from
their [NRL OLSR Repo](https://github.com/USNavalResearchLaboratory/nrlolsr). Unpack it and place the previously
downloaded protolib library inside the *nrlolsr* main directory. Then execute the following commands:
```shell
cd ./unix
make -f Makefile.{os}

View file

@ -1,12 +1,13 @@
# Quagga Routing Suite
* Table of Contents
{:toc}
## Overview
Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix
platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by
Kunihiro Ishiguro.
The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix
kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically
implement a routing protocol and communicate routing updates to the zebra daemon.
## Quagga Package Install
@ -17,10 +18,13 @@ sudo apt-get install quagga
## Quagga Source Install
First, download the source code from their [official webpage](https://www.quagga.net/).
```shell
sudo apt-get install gawk
```
Extract the tarball, go to the directory of your currently extracted code and issue the following commands.
```shell
./configure
make

View file

@ -1,11 +1,11 @@
# Software Defined Networking
* Table of Contents
{:toc}
## Overview
Ryu is a component-based software defined networking framework. Ryu provides software components with a well defined API
that makes it easy for developers to create new network management and control applications. Ryu supports various
protocols for managing network devices, such as OpenFlow, Netconf, OF-config, etc. Ryu fully supports OpenFlow
1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the code is freely available under the Apache 2.0 license.
## Installation

View file

@ -1,15 +1,15 @@
# Security Services
* Table of Contents
{:toc}
## Overview
The security services offer a wide variety of protocols capable of satisfying most use cases. These include the IP
security protocols (IPsec), for providing security at the IP layer through authentication and encryption of IP network
packets. Virtual Private Networks (VPNs) and firewalls are also available for use.
## Installation
Libraries needed for some of the security services.
Libraries needed for some security services.
```shell
sudo apt-get install ipsec-tools racoon
@ -71,7 +71,9 @@ sudo cp pki/dh.pem $KEYDIR/dh1024.pem
Add VPNServer service to nodes desired for running an OpenVPN server.
Modify [sampleVPNServer](https://github.com/coreemu/core/blob/master/package/examples/services/sampleVPNServer) for the
following
* Edit keydir key/cert directory
* Edit keyname to use generated server name above
* Edit vpnserver to match an address that the server node will have
@ -80,7 +82,9 @@ Modify [sampleVPNServer](https://github.com/coreemu/core/blob/master/package/exa
Add VPNClient service to nodes desired for acting as an OpenVPN client.
Modify [sampleVPNClient](https://github.com/coreemu/core/blob/master/package/examples/services/sampleVPNClient) for the
following
* Edit keydir key/cert directory
* Edit keyname to use generated client name above
* Edit vpnserver to match the address a server was configured to use

View file

@ -1,13 +1,11 @@
# Utility Services
* Table of Contents
{:toc}
# Overview
## Overview
A variety of convenience services for carrying out common networking changes.
The following services are provided as utilities:
* UCARP
* IP Forward
* Default Routing
@ -25,15 +23,19 @@ The following services are provided as utilities:
## Installation
To install the functionality of the previously mentioned services you can run the following command:
```shell
sudo apt-get install isc-dhcp-server apache2 libpcap-dev radvd at
```
## UCARP
UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a
portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
alternative to the patents-bloated VRRP).
Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between
different operating systems and no need for any dedicated extra network link between redundant hosts.
### Installation

View file

@ -1,36 +1,48 @@
# XORP routing suite
* Table of Contents
{:toc}
## Overview
XORP is an open networking platform that supports OSPF, RIP, BGP, OLSR, VRRP, PIM, IGMP (Multicast) and other routing
protocols. Most protocols support IPv4 and IPv6 where applicable. It is known to work on various Linux distributions and
flavors of BSD.
XORP started life as a project at the ICSI Center for Open Networking (ICON) at the International Computer Science
Institute in Berkeley, California, USA, and spent some time with the team at XORP, Inc. It is now maintained and
improved on a volunteer basis by a core of long-term XORP developers and some newer contributors.
XORP's primary goal is to be an open platform for networking protocol implementations and an alternative to proprietary
and closed networking products in the marketplace today. It is the only open source platform to offer integrated
multicast capability.
XORP design philosophy is:
* modularity
* extensibility
* performance
* robustness
This is achieved by carefully separating functionalities into independent modules, and by providing an API for each
module.
XORP divides into two subsystems. The higher-level ("user-level") subsystem consists of the routing protocols. The
lower-level ("kernel") manages the forwarding path, and provides APIs for the higher-level to access.
User-level XORP uses multi-process architecture with one process per routing protocol, and a novel inter-process
communication mechanism called XRL (XORP Resource Locator).
The lower-level subsystem can use traditional UNIX kernel forwarding, or the Click modular router. The modularity and
independence of the lower-level from the user-level subsystem allows for its easy replacement with other solutions,
including high-end hardware-based forwarding engines.
## Installation
In order to install the XORP Routing Suite, you must first install scons, which is needed to compile it.
```shell
sudo apt-get install scons
```
Then, download XORP from its official [release web page](http://www.xorp.org/releases/current/).
```shell
# download and unpack the latest release tarball from http://www.xorp.org/releases/current/
cd xorp

57
mkdocs.yml Normal file
View file

@ -0,0 +1,57 @@
site_name: CORE Documentation
use_directory_urls: false
theme:
name: material
palette:
- scheme: slate
toggle:
icon: material/brightness-4
name: Switch to Light Mode
primary: teal
accent: teal
- scheme: default
toggle:
icon: material/brightness-7
name: Switch to Dark Mode
primary: teal
accent: teal
features:
- navigation.path
- navigation.instant
- navigation.footer
- content.code.copy
markdown_extensions:
- pymdownx.snippets:
base_path: docs
- admonition
- pymdownx.details
- pymdownx.superfences
- pymdownx.tabbed:
alternate_style: true
- pymdownx.inlinehilite
nav:
- Home: index.md
- Overview:
- Architecture: architecture.md
- Performance: performance.md
- Installation:
- Overview: install.md
- Ubuntu: install_ubuntu.md
- CentOS: install_centos.md
- Detailed Topics:
- GUI: gui.md
- Node Types:
- Overview: nodetypes.md
- Docker: docker.md
- LXC: lxc.md
- Services:
- Config Services: configservices.md
- Services (Deprecated): services.md
- API:
- Python: python.md
- gRPC: grpc.md
- Distributed: distributed.md
- Control Network: ctrlnet.md
- Hardware In The Loop: hitl.md
- EMANE: emane.md
- Developers Guide: devguide.md