commit 02d9418808
4 changed files with 494 additions and 211 deletions

@ -26,7 +26,7 @@ A CORE node is a lightweight virtual machine. The CORE framework runs on Linux.

### Linux

Linux network namespaces (also known as netns, LXC, or [Linux containers](http://lxc.sourceforge.net/)) is the primary virtualization technique used by CORE. LXC has been part of the mainline Linux kernel since 2.6.24. Most recent Linux distributions have namespaces-enabled kernels out of the box. A namespace is created using the ```clone()``` system call. Each namespace has its own process environment and private network stack. Network namespaces share the same filesystem in CORE.

Linux network namespaces (also known as netns) are the primary virtualization technique used by CORE. Most recent Linux distributions have namespaces-enabled kernels out of the box. A namespace is created using the ```clone()``` system call. Each namespace has its own process environment and private network stack. Network namespaces share the same filesystem in CORE.
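
For illustration only, the same kernel primitive can be exercised by hand with the iproute2 tools; this is not how CORE creates its nodes (CORE calls ```clone()``` directly), and the namespace name here is a placeholder:

```shell
# create a namespace, bring up its loopback, ping inside it, then clean up
sudo ip netns add n1
sudo ip netns exec n1 ip link set lo up
sudo ip netns exec n1 ping -c 1 127.0.0.1
sudo ip netns del n1
```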

CORE combines these namespaces with Linux Ethernet bridging to form networks. Link characteristics are applied using Linux Netem queuing disciplines. Ebtables provides Ethernet frame filtering on Linux bridges. Wireless networks are emulated by controlling which interfaces can send and receive with ebtables rules.
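
A hand-rolled sketch of those primitives (the bridge and interface names below are placeholders, not what CORE generates):

```shell
sudo brctl addbr b0                                         # Linux Ethernet bridge
sudo tc qdisc add dev veth0 root netem delay 20ms loss 1%   # link characteristics via netem
sudo ebtables -A FORWARD -i veth0 -j DROP                   # frame filtering, the basis of wireless emulation
```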

@ -44,17 +44,10 @@ Install Path | Description

# Pre-Req Python Requirements

The newly added gRPC API which depends on python library grpcio is not commonly found within system repos.
To account for this it would be recommended to install the python dependencies using the **requirements.txt** found in
the latest release.

```shell
sudo pip install -r requirements.txt
```

## Ubuntu 19.04

Ubuntu 19.04 can provide all the packages needed at the system level and can be installed as follows:

```shell
# python 2
sudo apt install python-configparser python-enum34 python-future python-grpcio python-lxml

@ -62,6 +55,17 @@ sudo apt install python-configparser python-enum34 python-future python-grpcio p
sudo apt install python3-configparser python3-enum34 python3-future python3-grpcio python3-lxml
```

## Other Distros

The newly added gRPC API depends on the python library grpcio, which is not commonly found within system repos.
To account for this, it is recommended to install the python dependencies using the **requirements.txt** found in
the latest release.

```shell
# will need to use pip3 for python3 usage
sudo pip install -r requirements.txt
```
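
For python3 usage, the equivalent would presumably be:

```shell
sudo pip3 install -r requirements.txt
```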

# Pre-Req Installing OSPF MDR

Virtual networks generally require some form of routing in order to work (e.g. to automatically populate routing

@ -121,9 +125,9 @@ Ubuntu package defaults to using systemd for running as a service.

```shell
# python2
sudo apt ./core_python_$VERSION_amd64.deb
sudo apt install ./core_python_$VERSION_amd64.deb
# python3
sudo apt ./core_python3_$VERSION_amd64.deb
sudo apt install ./core_python3_$VERSION_amd64.deb
```

Run the CORE GUI as a normal user:
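
As the context below notes, the GUI is launched with the *core-gui* command:

```shell
core-gui
```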

@ -202,7 +206,11 @@ After running the *core-gui* command, a GUI should appear with a canvas for draw

This option is listed here for developers and advanced users who are comfortable patching and building source code.
Please consider using the binary packages instead for a simplified install experience.

## Pre-Req All
## Download and Extract Source Code

You can obtain the CORE source from the [CORE GitHub](https://github.com/coreemu/core) page.
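
For example, cloning the repository (a release tarball from the same page works as well):

```shell
git clone https://github.com/coreemu/core.git
cd core
```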

## Install grpcio-tools

Python module grpcio-tools is currently needed to generate code from the CORE protobuf file during the build.

@ -213,19 +221,21 @@ pip2 install grpcio-tools
pip3 install grpcio-tools
```

## Pre-Reqs Ubuntu 18.04
## Distro Requirements

### Ubuntu 18.04 Requirements

```shell
sudo apt install automake pkg-config gcc libev-dev bridge-utils ebtables python-dev python-setuptools tk libtk-img
```

## Pre-Reqs Ubuntu 16.04
### Ubuntu 16.04 Requirements

```shell
sudo apt-get install automake bridge-utils ebtables python-dev libev-dev python-setuptools libtk-img
```

## Pre-Reqs CentOS 7
### CentOS 7 with Gnome Desktop Requirements

```shell
sudo yum -y install automake gcc python-devel libev-devel tk

@ -235,15 +245,13 @@ sudo yum -y install automake gcc python-devel libev-devel tk

```shell
./bootstrap.sh
# for python2
PYTHON=python2 ./configure
# for python3
PYTHON=python3 ./configure
# use python2 or python3 depending on desired version
PYTHON=$VERSION ./configure
make
sudo make install
```

## Build Documentation
# Building Documentation

Building documentation requires python-sphinx, which is not noted above.

@ -256,14 +264,12 @@ sudo apt install python3-sphinx
sudo yum install python3-sphinx

./bootstrap.sh
# for python2
PYTHON=python2 ./configure
# for python3
PYTHON=python3 ./configure
# use python2 or python3 depending on desired version
PYTHON=$VERSION ./configure
make doc
```

## Build Packages
# Building Packages

Build package commands; DESTDIR is used as the make install target and is then used for packaging by fpm.

**NOTE: clean the DESTDIR if re-using the same directory**

@ -272,10 +278,8 @@ Build package commands, DESTDIR is used to make install into and then for packag

```shell
./bootstrap.sh
# for python2
PYTHON=python2 ./configure
# for python3
PYTHON=python3 ./configure
# use python2 or python3 depending on desired version
PYTHON=$VERSION ./configure
make
mkdir /tmp/core-build
make fpm DESTDIR=/tmp/core-build

438 docs/services.md

@ -3,13 +3,141 @@

* Table of Contents
{:toc}

## Custom Services
## Services

CORE supports custom developed services by way of dynamically loading user created python files.
Custom services should be placed within the path defined by **custom_services_dir** in the CORE
configuration file. This path cannot end in **/services**.
CORE uses the concept of services to specify what processes or scripts run on a
node when it is started. Layer-3 nodes such as routers and PCs are defined by
the services that they run.

Follow these steps to add your own services:
Services may be customized for each node, or new custom services can be
created. New node types can be created each having a different name, icon, and
set of default services. Each service defines the per-node directories,
configuration files, startup index, starting commands, validation commands,
shutdown commands, and meta-data associated with a node.

**NOTE:**
Network namespace nodes do not undergo the normal Linux boot process
using the **init**, **upstart**, or **systemd** frameworks. These
lightweight nodes use configured CORE *services*.

## Default Services and Node Types

Here are the default node types and their services:

* *router* - zebra, OSPFv2, OSPFv3, and IPForward services for IGP
link-state routing.
* *host* - DefaultRoute and SSH services, representing an SSH server having a
default route when connected directly to a router.
* *PC* - DefaultRoute service for having a default route when connected
directly to a router.
* *mdr* - zebra, OSPFv3MDR, and IPForward services for
wireless-optimized MANET Designated Router routing.
* *prouter* - a physical router, having the same default services as the
*router* node type; for incorporating Linux testbed machines into an
emulation.

Configuration files can be automatically generated by each service. For
example, CORE automatically generates routing protocol configuration for the
router nodes in order to simplify the creation of virtual networks.

To change the services associated with a node, double-click on the node to
invoke its configuration dialog and click on the *Services...* button,
or right-click a node and choose *Services...* from the menu.
Services are enabled or disabled by clicking on their names. The button next to
each service name allows you to customize all aspects of this service for this
node. For example, special route redistribution commands could be inserted
into the Quagga routing configuration associated with the zebra service.

To change the default services associated with a node type, use the Node Types
dialog available from the *Edit* button at the end of the Layer-3 nodes
toolbar, or choose *Node types...* from the *Session* menu. Note that
any new services selected are not applied to existing nodes if the nodes have
been customized.

The node types are saved in a **~/.core/nodes.conf** file, not with the
**.imn** file. Keep this in mind when changing the default services for
existing node types; it may be better to simply create a new node type. It is
recommended that you do not change the default built-in node types. The
**nodes.conf** file can be copied between CORE machines to save your custom
types.

## Customizing a Service

A service can be fully customized for a particular node. From the node's
configuration dialog, click on the button next to the service name to invoke
the service customization dialog for that service.
The dialog has three tabs for configuring the different aspects of the service:
files, directories, and startup/shutdown.

**NOTE:**
A **yellow** customize icon next to a service indicates that service
requires customization (e.g. the *Firewall* service).
A **green** customize icon indicates that a custom configuration exists.
Click the *Defaults* button when customizing a service to remove any
customizations.

The Files tab is used to display or edit the configuration files or scripts that
are used for this service. Files can be selected from a drop-down list, and
their contents are displayed in a text entry below. The file contents are
generated by the CORE daemon based on the network topology that exists at
the time the customization dialog is invoked.

The Directories tab shows the per-node directories for this service. For the
default types, CORE nodes share the same filesystem tree, except for these
per-node directories that are defined by the services. For example, the
**/var/run/quagga** directory needs to be unique for each node running
the Zebra service, because Quagga running on each node needs to write separate
PID files to that directory.

**NOTE:**
The **/var/log** and **/var/run** directories are
mounted uniquely per-node by default.
Per-node mount targets can be found in **/tmp/pycore.nnnnn/nN.conf/**
(where *nnnnn* is the session number and *N* is the node number.)
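
For example, assuming a hypothetical session number of 12345, the mounts for node n1 could be inspected with:

```shell
ls /tmp/pycore.12345/n1.conf/
```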

The Startup/shutdown tab lists commands that are used to start and stop this
service. The startup index allows configuring when this service starts relative
to the other services enabled for this node; a service with a lower startup
index value is started before those with higher values. Because shell scripts
generated by the Files tab will not have execute permissions set, the startup
commands should include the shell name, with
something like ```sh script.sh```.

Shutdown commands optionally terminate the process(es) associated with this
service. Generally they send a kill signal to the running process using the
*kill* or *killall* commands. If the service does not terminate
the running processes using a shutdown command, the processes will be killed
when the *vnoded* daemon is terminated (with *kill -9*) and
the namespace destroyed. It is a good practice to
specify shutdown commands, which will allow for proper process termination, and
for run-time control of stopping and restarting services.

Validate commands are executed following the startup commands. A validate
command can execute a process or script that should return zero if the service
has started successfully, and have a non-zero return value for services that
have had a problem starting. For example, the *pidof* command will check
if a process is running and return zero when found. When a validate command
produces a non-zero return value, an exception is generated, which will cause
an error to be displayed in the Check Emulation Light.
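
A minimal sketch of such a validate command, using *pidof* against the Zebra daemon:

```shell
pidof zebra   # exit status 0 if a zebra process is running, non-zero otherwise
```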

**TIP:**
To start, stop, and restart services during run-time, right-click a
node and use the *Services...* menu.

## New Services

Services can save time required to configure nodes, especially if a number
of nodes require similar configuration procedures. New services can be
introduced to automate tasks.

### Leveraging UserDefined

The easiest way to capture the configuration of a new process into a service
is by using the **UserDefined** service. This is a blank service where any
aspect may be customized. The UserDefined service is convenient for testing
ideas for a service before adding a new service type.

### Creating New Service

1. Modify the [Example Service File](/daemon/examples/myservices/sample.py)
to do what you want. It could generate config/script files, mount per-node

@ -24,6 +152,12 @@ Follow these steps to add your own services:

3. Add a **custom_services_dir = /home/username/.core/myservices** entry to the
/etc/core/core.conf file.

**NOTE:**
The directory name used in **custom_services_dir** should be unique and
should not correspond to
any existing Python module name. For example, don't use the name **subprocess**
or **services**.

4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax)
should be displayed in the /var/log/core-daemon.log log file (or on screen).
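
A sketch of that restart-and-check cycle, assuming a SysV-style service setup:

```shell
sudo service core-daemon restart    # or: sudo systemctl restart core-daemon
tail /var/log/core-daemon.log       # look for Python import errors
```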

@ -31,3 +165,297 @@ Follow these steps to add your own services:

5. Start using your custom service on your nodes. You can create a new node
type that uses your service, or change the default services for an existing
node type, or change individual nodes.

If you have created a new service type that may be useful to others, please
consider contributing it to the CORE project.

## Available Services

### BIRD Internet Routing Daemon
The [BIRD Internet Routing Daemon](https://bird.network.cz/) is a routing daemon; i.e., software responsible for managing kernel packet forwarding tables. It aims to develop a dynamic IP routing daemon with full support of all modern routing protocols, an easy to use configuration interface, and a powerful route filtering language, primarily targeted on (but not limited to) Linux and other UNIX-like systems and distributed under the GNU General Public License. BIRD has a free implementation of several well known and common routing and router-supplemental protocols, namely RIP, RIPng, OSPFv2, OSPFv3, BGP, BFD, and NDP/RA. BIRD supports the IPv4 and IPv6 address families, the Linux kernel, and several BSD variants (tested on FreeBSD, NetBSD and OpenBSD). BIRD consists of the bird daemon and the birdc interactive CLI client used for supervision.

In order to use the BIRD Internet Routing Daemon, you must first install the project on your machine.

#### BIRD Package Install
```shell
sudo apt-get install bird
```

#### BIRD Source Code Install
You can download the BIRD source code from its [official repository](https://gitlab.labs.nic.cz/labs/bird/).
```shell
./configure
make
su
make install
vi /etc/bird/bird.conf
```
The installation will place the bird directory inside */etc*, where you will also find its config file.

In order to use the BIRD Internet Routing Daemon, you must modify *bird.conf*, because the given configuration file is not configured beyond allowing the bird daemon to start, which means that nothing else will happen if you run it. Keeran Marquis has a very detailed example on [Configuring BGP using Bird on Ubuntu](https://blog.marquis.co/configuring-bgp-using-bird-on-ubuntu-14-04lts/) which can be used as a building block to implement your custom routing daemon.
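
As a minimal sketch only (the router id is a placeholder and this does not configure any routing protocol), one could append a router id and a device protocol before starting the daemon:

```shell
cat >> /etc/bird/bird.conf <<'EOF'
router id 10.0.0.1;    # placeholder router id
protocol device {
    scan time 10;      # rescan interfaces every 10 seconds
}
EOF
bird                   # start the daemon, then supervise it with birdc
```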

### FRRouting
FRRouting is a routing software package that provides TCP/IP based routing services with support for routing protocols such as BGP, RIP, OSPF, IS-IS and more. FRR also supports special BGP Route Reflector and Route Server behavior. In addition to traditional IPv4 routing protocols, FRR also supports IPv6 routing protocols. With an SNMP daemon that supports the AgentX protocol, FRR provides routing protocol MIB read-only access (SNMP Support).

FRR currently supports the following protocols:
* BGP
* OSPFv2
* OSPFv3
* RIPv1
* RIPv2
* RIPng
* IS-IS
* PIM-SM/MSDP
* LDP
* BFD
* Babel
* PBR
* OpenFabric
* EIGRP (alpha)
* NHRP (alpha)

#### FRRouting Package Install
```shell
sudo apt install curl
curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
FRRVER="frr-stable"
echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
sudo apt update && sudo apt install frr frr-pythontools
```

#### FRRouting Source Code Install
Building FRR from source is the best way to ensure you have the latest features and bug fixes. Details for each supported platform, including dependency package listings, permissions, and other gotchas, are in the developer’s documentation.

FRR’s source is available on the project [GitHub page](https://github.com/FRRouting/frr).
```shell
git clone https://github.com/FRRouting/frr.git
```

Change into your FRR source directory and issue:
```shell
./bootstrap.sh
```
Then, choose the configuration options that you wish to use for the installation. You can find these options on FRR's [official webpage](http://docs.frrouting.org/en/latest/installation.html). Once you have chosen your configure options, run the configure script and pass the options you chose:
```shell
./configure \
    --prefix=/usr \
    --enable-exampledir=/usr/share/doc/frr/examples/ \
    --localstatedir=/var/run/frr \
    --sbindir=/usr/lib/frr \
    --sysconfdir=/etc/frr \
    --enable-pimd \
    --enable-watchfrr \
    ...
```
After configuring the software, you are ready to build and install it in your system.
```shell
make && sudo make install
```
If everything finishes successfully, FRR should be installed.

### Docker
The Docker service allows running Docker containers within CORE nodes.
Running Docker within a CORE node adds extensibility to
the CORE services, allowing network applications and protocols to be easily
packaged and run on any node.

This service will add a new group to the services list, with a service called Docker which will just start the Docker daemon within the node but not run anything. It will also scan all Docker images on the host machine. If any are tagged with 'core', they will be added as services to the Docker group. The image will then be run automatically if that service is selected.

This requires a recent version of Docker. This was tested using a PPA on Ubuntu with version 1.2.0. The version in the standard Ubuntu repo is too old for this purpose (we need --net host).

#### Docker Installation
To use Docker services, you must first install the Docker python library. This is used to interface with Docker from the python service.

```shell
sudo apt-get install docker.io
sudo apt-get install python-pip
pip install docker-py
```
Once everything runs successfully, a Docker group will appear under services. An example use case is to pull an image from [Docker Hub](https://hub.docker.com/). A test image has been uploaded for this purpose:
```shell
sudo docker pull stuartmarsden/multicastping
```
This downloads an image which is based on Ubuntu 14.04 with python and twisted. It runs a simple program that sends a multicast ping, and listens for and records any pings it receives. In order for this to appear as a Docker service it must be tagged with core.
Find out the id by running 'sudo docker images'. You should see all installed images, and the one you want looks like this:
```shell
stuartmarsden/multicastping    latest    4833487e66d2    20 hours ago    487 MB
```
The id will be different on your machine, so use it in the following command:
```shell
sudo docker tag 4833487e66d2 stuartmarsden/multicastping:core
```
This image will be listed in the services after we restart the core-daemon:
```shell
sudo service core-daemon restart
```

### NRL Services
The Protean Protocol Prototyping Library (ProtoLib) is a cross-platform library that allows applications to be built while supporting a variety of platforms including Linux, Windows, WinCE/PocketPC, MacOS, FreeBSD, Solaris, etc., as well as the simulation environments of NS2 and Opnet. The goal of Protolib is to provide a set of simple, cross-platform C++ classes that allow development of network protocols and applications that can run on different platforms and in network simulation environments. While Protolib provides an overall framework for developing working protocol implementations, applications, and simulation modules, the individual classes are designed for use as stand-alone components when possible. Although Protolib is principally for research purposes, the code has been constructed to provide robust, efficient performance and adaptability to real applications. In some cases, the code consists of data structures, etc. useful in protocol implementations and, in other cases, provides common, cross-platform interfaces to system services and functions (e.g., sockets, timers, routing tables, etc.).

Currently the Naval Research Laboratory uses this library to develop a wide variety of protocols. The NRL Protolib currently supports the following protocols:
* MGEN_Sink
* NHDP
* SMF
* OLSR
* OLSRv2
* OLSRORG
* MgenActor
* arouted

#### NRL Installation
In order to be able to use the different protocols that NRL offers, you must first download the support library itself. You can get the source code from their [official nightly snapshots website](https://downloads.pf.itd.nrl.navy.mil/protolib/nightly_snapshots/).

#### Multi-Generator (MGEN)
Download MGEN from the [NRL MGEN nightly snapshots](https://downloads.pf.itd.nrl.navy.mil/mgen/nightly_snapshots/), unpack it and copy the protolib library into the main folder *mgen*. Execute the following commands to build the protocol.
```shell
cd mgen/makefiles
make -f Makefile.{os} mgen
```
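
Here *{os}* stands for your platform's makefile suffix; on Linux this would presumably be:

```shell
make -f Makefile.linux mgen
```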

#### Neighborhood Discovery Protocol (NHDP)
Download NHDP from the [NRL NHDP nightly snapshots](https://downloads.pf.itd.nrl.navy.mil/nhdp/nightly_snapshots/).
```shell
sudo apt-get install libpcap-dev libboost-all-dev
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.8.0/protoc-3.8.0-linux-x86_64.zip
unzip protoc-3.8.0-linux-x86_64.zip
```
Then place the binaries in your $PATH. To see your current path, you can issue the following command:
```shell
echo $PATH
```
Go to the downloaded *NHDP* tarball, unpack it and place the protolib library inside the NHDP main folder. Now, compile the NHDP protocol.
```shell
cd nhdp/unix
make -f Makefile.{os}
```

#### Simplified Multicast Forwarding (SMF)
Download SMF from the [NRL SMF nightly snapshot](https://downloads.pf.itd.nrl.navy.mil/smf/nightly_snapshots/), unpack it and place the protolib library inside the *smf* main folder.
```shell
cd smf/makefiles
make -f Makefile.{os}
```

#### Optimized Link State Routing Protocol (OLSR)
To install the OLSR protocol, download their source code from their [nightly snapshots](https://downloads.pf.itd.nrl.navy.mil/olsr/nightly_snapshots/nrlolsr-svnsnap.tgz). Unpack it and place the previously downloaded protolib library inside the *nrlolsr* main directory. Then execute the following commands:
```shell
cd ./unix
make -f Makefile.{os}
```

### Quagga Routing Suite
Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix platforms, particularly FreeBSD, Linux, Solaris and NetBSD. Quagga is a fork of GNU Zebra which was developed by Kunihiro Ishiguro.
The Quagga architecture consists of a core daemon, zebra, which acts as an abstraction layer to the underlying Unix kernel and presents the Zserv API over a Unix or TCP stream to Quagga clients. It is these Zserv clients which typically implement a routing protocol and communicate routing updates to the zebra daemon.

#### Quagga Package Install
```shell
sudo apt-get install quagga
```

#### Quagga Source Install
First, download the source code from their [official webpage](https://www.quagga.net/).
```shell
sudo apt-get install gawk
```
Extract the tarball, go to the directory of your currently extracted code and issue the following commands.
```shell
./configure
make
sudo make install
```

### Software Defined Networking
Ryu is a component-based software defined networking framework. Ryu provides software components with well defined APIs that make it easy for developers to create new network management and control applications. Ryu supports various protocols for managing network devices, such as OpenFlow, Netconf, OF-config, etc. For OpenFlow, Ryu fully supports versions 1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the code is freely available under the Apache 2.0 license.

#### Installation
##### Prerequisites
```shell
sudo apt-get install gcc python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev zlib1g-dev
```
##### Ryu Package Install
```shell
pip install ryu
```
##### Ryu Source Install
```shell
git clone git://github.com/osrg/ryu.git
cd ryu; pip install .
```
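
Once installed, a controller application can be launched with the *ryu-manager* command, for example one of the sample apps bundled with Ryu:

```shell
ryu-manager ryu.app.simple_switch_13
```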

### Security Services
The security services offer a wide variety of protocols capable of satisfying most use cases. They include the IP security protocols, which provide security at the IP layer through authentication and encryption of IP network packets. Virtual Private Networks (VPNs) and firewalls are also available to the user.

#### Installation
```shell
sudo apt-get install ipsec-tools racoon openvpn
```

### UCARP
UCARP allows a couple of hosts to share common virtual IP addresses in order to provide automatic failover. It is a portable userland implementation of the secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's alternative to the patents-bloated VRRP).

Strong points of the CARP protocol are: very low overhead, cryptographically signed messages, interoperability between different operating systems and no need for any dedicated extra network link between redundant hosts.

#### Installation
```shell
sudo apt-get install ucarp
```
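
A minimal sketch of two hosts sharing a virtual address (the addresses, password, and script paths are placeholders):

```shell
sudo ucarp --interface=eth0 --srcip=10.0.0.1 --vhid=1 --pass=secret \
    --addr=10.0.0.250 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
```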

### Utilities Services
The following services are provided as utilities:
* Default Routing
* Default Multicast Routing
* Static Routing
* SSH
* DHCP
* DHCP Client
* FTP
* HTTP
* PCAP
* RADVD
* ATD

#### Installation
To install the functionality of the previously mentioned services, you can run the following command:
```shell
sudo apt-get install isc-dhcp-server apache2 libpcap-dev radvd at
```

### XORP routing suite
XORP is an open networking platform that supports OSPF, RIP, BGP, OLSR, VRRP, PIM, IGMP (Multicast) and other routing protocols. Most protocols support IPv4 and IPv6 where applicable. It is known to work on various Linux distributions and flavors of BSD.

XORP started life as a project at the ICSI Center for Open Networking (ICON) at the International Computer Science Institute in Berkeley, California, USA, and spent some time with the team at XORP, Inc. It is now maintained and improved on a volunteer basis by a core of long-term XORP developers and some newer contributors.

XORP's primary goal is to be an open platform for networking protocol implementations and an alternative to proprietary and closed networking products in the marketplace today. It is the only open source platform to offer integrated multicast capability.

XORP's design philosophy is:
* modularity
* extensibility
* performance
* robustness

This is achieved by carefully separating functionalities into independent modules, and by providing an API for each module.

XORP divides into two subsystems. The higher-level ("user-level") subsystem consists of the routing protocols. The lower-level ("kernel") manages the forwarding path, and provides APIs for the higher-level to access.

User-level XORP uses a multi-process architecture with one process per routing protocol, and a novel inter-process communication mechanism called XRL (XORP Resource Locator).

The lower-level subsystem can use traditional UNIX kernel forwarding, or the Click modular router. The modularity and independence of the lower-level from the user-level subsystem allows for its easy replacement with other solutions, including high-end hardware-based forwarding engines.

#### Installation
In order to install the XORP routing suite, you must first install scons in order to compile it.
```shell
sudo apt-get install scons
```
Then, download XORP from its official [release web page](http://www.xorp.org/releases/current/).
```shell
# download and unpack a source release from http://www.xorp.org/releases/current/
cd xorp
sudo apt-get install libssl-dev ncurses-dev
scons
scons install
```

205 docs/usage.md

@ -23,7 +23,7 @@ __Note: The CORE GUI is currently in a state of transition. The replacement can

## Prerequisites

Beyond instaling CORE, you must have the CORE daemon running. This is done on the command line with either Systemd or SysV
Beyond installing CORE, you must have the CORE daemon running. This is done on the command line with either Systemd or SysV
```shell
# systemd
sudo systemctl daemon-reload

@ -69,51 +69,51 @@ The toolbar is a row of buttons that runs vertically along the left side of the

When CORE is in Edit mode (the default), the vertical Editing Toolbar exists on the left side of the CORE window. Below are brief descriptions for each toolbar item, starting from the top. Most of the tools are grouped into related sub-menus, which appear when you click on their group icon.

* |select| *Selection Tool* - default tool for selecting, moving, configuring nodes
* |start| *Start button* - starts Execute mode, instantiates the emulation
* |link| *Link* - the Link Tool allows network links to be drawn between two nodes by clicking and dragging the mouse
* |router| *Network-layer virtual nodes*
* |router| *Router* - runs Quagga OSPFv2 and OSPFv3 routing to forward packets
* |host| *Host* - emulated server machine having a default route, runs SSH server
* |pc| *PC* - basic emulated machine having a default route, runs no processes by default
* |mdr| *MDR* - runs Quagga OSPFv3 MDR routing for MANET-optimized routing
* |router_green| *PRouter* - physical router represents a real testbed machine
* |document_properties| *Edit* - edit node types button invokes the CORE Node Types dialog. New types of nodes may be created having different icons and names. The default services that are started with each node type can be changed here.
* |hub| *Link-layer nodes*
* |hub| *Hub* - the Ethernet hub forwards incoming packets to every connected node
* |lanswitch| *Switch* - the Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table
* |wlan| *Wireless LAN* - when routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them
* |rj45| *RJ45* - with the RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation
* |tunnel| *Tunnel* - the Tunnel Tool allows connecting together more than one CORE emulation using GRE tunnels
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/select.gif) *Selection Tool* - default tool for selecting, moving, configuring nodes
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/start.gif) *Start button* - starts Execute mode, instantiates the emulation
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/link.gif) *Link* - the Link Tool allows network links to be drawn between two nodes by clicking and dragging the mouse
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/router.gif) *Network-layer virtual nodes*
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/router.gif) *Router* - runs Quagga OSPFv2 and OSPFv3 routing to forward packets
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/host.gif) *Host* - emulated server machine having a default route, runs SSH server
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/pc.gif) *PC* - basic emulated machine having a default route, runs no processes by default
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/mdr.gif) *MDR* - runs Quagga OSPFv3 MDR routing for MANET-optimized routing
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/router_green.gif) *PRouter* - physical router represents a real testbed machine
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/document-properties.gif) *Edit* - edit node types button invokes the CORE Node Types dialog. New types of nodes may be created having different icons and names. The default services that are started with each node type can be changed here.
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/hub.gif) *Link-layer nodes*
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/hub.gif) *Hub* - the Ethernet hub forwards incoming packets to every connected node
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/lanswitch.gif) *Switch* - the Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/wlan.gif) *Wireless LAN* - when routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/rj45.gif) *RJ45* - with the RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/tunnel.gif) *Tunnel* - the Tunnel Tool allows connecting together more than one CORE emulation using GRE tunnels
* *Annotation Tools*
* |marker| *Marker* - for drawing marks on the canvas
* |oval| *Oval* - for drawing circles on the canvas that appear in the background
* |rectangle| *Rectangle* - for drawing rectangles on the canvas that appear in the background
* |text| *Text* - for placing text captions on the canvas
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/marker.gif) *Marker* - for drawing marks on the canvas
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/oval.gif) *Oval* - for drawing circles on the canvas that appear in the background
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/rectangle.gif) *Rectangle* - for drawing rectangles on the canvas that appear in the background
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/text.gif) *Text* - for placing text captions on the canvas

### Execution Toolbar

When the Start button is pressed, CORE switches to Execute mode, and the Edit toolbar on the left of the CORE window is replaced with the Execution toolbar. Below are the items on this toolbar, starting from the top.

* |select| *Selection Tool* - in Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node
* |stop| *Stop button* - stops Execute mode, terminates the emulation, returns CORE to edit mode.
* |observe| *Observer Widgets Tool* - clicking on this magnifying glass icon
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/select.gif) *Selection Tool* - in Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/stop.gif) *Stop button* - stops Execute mode, terminates the emulation, returns CORE to edit mode.
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/observe.gif) *Observer Widgets Tool* - clicking on this magnifying glass icon
invokes a menu for easily selecting an Observer Widget. The icon has a darker
gray background when an Observer Widget is active, during which time moving
the mouse over a node will pop up an information display for that node.
* |plot| *Plot Tool* - with this tool enabled, clicking on any link will
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/plot.gif) *Plot Tool* - with this tool enabled, clicking on any link will
activate the Throughput Widget and draw a small, scrolling throughput plot
on the canvas. The plot shows the real-time kbps traffic for that link.
The plots may be dragged around the canvas; right-click on a
plot to remove it.
* |marker| *Marker* - for drawing freehand lines on the canvas, useful during
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/marker.gif) *Marker* - for drawing freehand lines on the canvas, useful during
demonstrations; markings are not saved
* |twonode| *Two-node Tool* - click to choose a starting and ending node, and
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/twonode.gif) *Two-node Tool* - click to choose a starting and ending node, and
run a one-time *traceroute* between those nodes or a continuous *ping -R*
between nodes. The output is displayed in real time in a results box, while
the IP addresses are parsed and the complete network path is highlighted on
the CORE display.
* |run| *Run Tool* - this tool allows easily running a command on all or a
* ![alt text](https://github.com/coreemu/core/blob/master/gui/icons/tiny/run.gif) *Run Tool* - this tool allows easily running a command on all or a
subset of all nodes. A list box allows selecting any of the nodes. A text
entry box allows entering any command. The command should return immediately,
otherwise the display will block awaiting response. The *ping* command, for

@ -834,155 +834,6 @@ to the Linux bridging and ebtables rules that are used.

The basic range wireless model does not support distributed emulation,
but EMANE does.

## Services

CORE uses the concept of services to specify what processes or scripts run on a
node when it is started. Layer-3 nodes such as routers and PCs are defined by
the services that they run.

Services may be customized for each node, or new custom services can be
created. New node types can be created each having a different name, icon, and
set of default services. Each service defines the per-node directories,
configuration files, startup index, starting commands, validation commands,
shutdown commands, and meta-data associated with a node.

**NOTE:**
Network namespace nodes do not undergo the normal Linux boot process
using the **init**, **upstart**, or **systemd** frameworks. These
lightweight nodes use configured CORE *services*.

### Default Services and Node Types

Here are the default node types and their services:

* *router* - zebra, OSFPv2, OSPFv3, and IPForward services for IGP
link-state routing.
* *host* - DefaultRoute and SSH services, representing an SSH server having a
default route when connected directly to a router.
* *PC* - DefaultRoute service for having a default route when connected
directly to a router.
* *mdr* - zebra, OSPFv3MDR, and IPForward services for
wireless-optimized MANET Designated Router routing.
* *prouter* - a physical router, having the same default services as the
*router* node type; for incorporating Linux testbed machines into an
emulation.

Configuration files can be automatically generated by each service. For
example, CORE automatically generates routing protocol configuration for the
router nodes in order to simplify the creation of virtual networks.

To change the services associated with a node, double-click on the node to
invoke its configuration dialog and click on the *Services...* button,
or right-click a node a choose *Services...* from the menu.
Services are enabled or disabled by clicking on their names. The button next to
each service name allows you to customize all aspects of this service for this
node. For example, special route redistribution commands could be inserted in
to the Quagga routing configuration associated with the zebra service.

To change the default services associated with a node type, use the Node Types
dialog available from the *Edit* button at the end of the Layer-3 nodes
toolbar, or choose *Node types...* from the *Session* menu. Note that
any new services selected are not applied to existing nodes if the nodes have
been customized.

The node types are saved in a **~/.core/nodes.conf** file, not with the
**.imn** file. Keep this in mind when changing the default services for
existing node types; it may be better to simply create a new node type. It is
recommended that you do not change the default built-in node types. The
**nodes.conf** file can be copied between CORE machines to save your custom
types.

### Customizing a Service

A service can be fully customized for a particular node. From the node's
configuration dialog, click on the button next to the service name to invoke
the service customization dialog for that service.
The dialog has three tabs for configuring the different aspects of the service:
files, directories, and startup/shutdown.

**NOTE:**
A **yellow** customize icon next to a service indicates that service
requires customization (e.g. the *Firewall* service).
A **green** customize icon indicates that a custom configuration exists.
Click the *Defaults* button when customizing a service to remove any
customizations.

The Files tab is used to display or edit the configuration files or scripts that
are used for this service. Files can be selected from a drop-down list, and
their contents are displayed in a text entry below. The file contents are
generated by the CORE daemon based on the network topology that exists at
the time the customization dialog is invoked.

The Directories tab shows the per-node directories for this service. For the
default types, CORE nodes share the same filesystem tree, except for these
per-node directories that are defined by the services. For example, the
**/var/run/quagga** directory needs to be unique for each node running
the Zebra service, because Quagga running on each node needs to write separate
PID files to that directory.

**NOTE:**
The **/var/log** and **/var/run** directories are
mounted uniquely per-node by default.
Per-node mount targets can be found in **/tmp/pycore.nnnnn/nN.conf/**
(where *nnnnn* is the session number and *N* is the node number.)

The Startup/shutdown tab lists commands that are used to start and stop this
service. The startup index allows configuring when this service starts relative
to the other services enabled for this node; a service with a lower startup
index value is started before those with higher values. Because shell scripts
generated by the Files tab will not have execute permissions set, the startup
commands should include the shell name, with
something like ```sh script.sh```.

Shutdown commands optionally terminate the process(es) associated with this
service. Generally they send a kill signal to the running process using the
*kill* or *killall* commands. If the service does not terminate
the running processes using a shutdown command, the processes will be killed
when the *vnoded* daemon is terminated (with *kill -9*) and
the namespace destroyed. It is a good practice to
specify shutdown commands, which will allow for proper process termination, and
for run-time control of stopping and restarting services.

Validate commands are executed following the startup commands. A validate
command can execute a process or script that should return zero if the service
has started successfully, and have a non-zero return value for services that
have had a problem starting. For example, the *pidof* command will check
if a process is running and return zero when found. When a validate command
produces a non-zero return value, an exception is generated, which will cause
an error to be displayed in the Check Emulation Light.

**TIP:**
To start, stop, and restart services during run-time, right-click a
node and use the *Services...* menu.

### Creating new Services

Services can save time required to configure nodes, especially if a number
of nodes require similar configuration procedures. New services can be
introduced to automate tasks.

The easiest way to capture the configuration of a new process into a service
is by using the **UserDefined** service. This is a blank service where any
aspect may be customized. The UserDefined service is convenient for testing
ideas for a service before adding a new service type.

To introduce new service types, a **myservices/** directory exists in the
user's CORE configuration directory, at **~/.core/myservices/**. A detailed
**README.txt** file exists in that directory to outline the steps necessary
for adding a new service. First, you need to create a small Python file that
defines the service; then the **custom_services_dir** entry must be set
in the **/etc/core/core.conf** configuration file. A sample is provided in
the **myservices/** directory.

**NOTE:**
The directory name used in **custom_services_dir** should be unique and
should not correspond to
any existing Python module name. For example, don't use the name **subprocess**
or **services**.

If you have created a new service type that may be useful to others, please
consider contributing it to the CORE project.

## Check Emulation Light

The |cel| Check Emulation Light, or CEL, is located in the bottom right-hand corner