docs: initial changes to support using mkdocs material
This commit is contained in: parent 785cf82ba3, commit 078e0df329
20 changed files with 253 additions and 187 deletions
3 .gitignore (vendored)
@@ -18,6 +18,9 @@ configure~
debian
stamp-h1

# python virtual environments
venv

# generated protobuf files
*_pb2.py
*_pb2_grpc.py
docs/architecture.md

@@ -1,25 +1,22 @@
# CORE Architecture

## Main Components

* core-daemon
    * Manages emulated sessions of nodes and links for a given network
    * Nodes are created using Linux namespaces
    * Links are created using Linux bridges and virtual ethernet peers
    * Packets sent over links are manipulated using traffic control
    * Provides gRPC API
* core-gui
    * GUI and daemon communicate over gRPC API
    * Drag and drop creation for nodes and links
    * Can launch terminals for emulated nodes in running sessions
    * Can save/open scenario files to recreate previous sessions
* vnoded
    * Command line utility for creating CORE node namespaces
* vcmd
    * Command line utility for sending shell commands to nodes

![](static/architecture.png)

@@ -57,5 +54,5 @@ rules.
CORE has been released by Boeing to the open source community under the BSD
license. If you find CORE useful for your work, please contribute back to the
project. Contributions can be as simple as reporting a bug, dropping a line of
encouragement, or can also include submitting patches or maintaining aspects
of the tool.
docs/configservices.md

@@ -1,7 +1,4 @@
# Config Services

## Overview

@@ -15,6 +12,7 @@ CORE services are a convenience for creating reusable dynamic scripts
to run on nodes, for carrying out specific task(s).

This boils down to the following functions:

* generating files the service will use, either directly for commands or for configuration
* command(s) for starting a service
* command(s) for validating a service

@@ -121,6 +119,7 @@ from typing import Dict, List
from core.config import ConfigString, ConfigBool, Configuration
from core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir


# class that subclasses ConfigService
class ExampleService(ConfigService):
    # unique name for your service within CORE

@@ -129,7 +128,7 @@ class ExampleService(ConfigService):
    group: str = "ExampleGroup"
    # directories that the service should shadow mount, hiding the system directory
    directories: List[str] = [
        "/usr/local/core",
    ]
    # files that this service should generate, defaults to the node's home directory
    # or can provide an absolute path to a mounted directory
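
To tie the pieces above together (generated files, start commands, validate commands), here is a minimal sketch of a complete config service. It is illustrative only: the class mirrors the `ExampleService` excerpt above, while the service name, the file contents, and any attribute not visible in that excerpt (`files`, `startup`, `validate`, `get_text_template`, and the remaining empty defaults) are assumptions based on the same `ConfigService` base class rather than content taken from this page.

```python
from typing import Dict, List

from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode


class HelloService(ConfigService):
    # hypothetical service, for illustration only
    name: str = "HelloService"
    group: str = "ExampleGroup"
    # no shadowed directories for this sketch
    directories: List[str] = []
    # file generated into the node's home directory
    files: List[str] = ["hello.sh"]
    executables: List[str] = []
    dependencies: List[str] = []
    # command(s) for starting the service
    startup: List[str] = ["bash hello.sh"]
    # command(s) for validating the service (none in this sketch)
    validate: List[str] = []
    shutdown: List[str] = []
    validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
    default_configs: List[Configuration] = []
    modes: Dict[str, Dict[str, str]] = {}

    def get_text_template(self, name: str) -> str:
        # assumed hook for providing the template used to generate "hello.sh"
        return "#!/bin/sh\necho hello from ${node.name}\n"
```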
docs/ctrlnet.md

@@ -1,13 +1,10 @@
# CORE Control Network

## Overview

The CORE control network allows the virtual nodes to communicate with their
host environment. There are two types: the primary control network and
auxiliary control networks. The primary control network is used mainly for
communicating with the virtual nodes from host machines and for master-slave
communications in a multi-server distributed environment. Auxiliary control
networks have been introduced for routing namespace hosted emulation

@@ -31,14 +28,14 @@ control networks, the session option should be used instead of the *core.conf*
default.

> **NOTE:** If you have a large scenario with more than 253 nodes, use a control
> network prefix that allows more than the suggested */24*, such as */23* or
> greater.

> **NOTE:** Running a session with a control network can fail if a previous
> session has set up a control network and its bridge is still up. Close
> the previous session first or wait for it to complete. If unable to, the
> *core-daemon* may need to be restarted and the lingering bridge(s) removed
> manually.

```shell
# Restart the CORE Daemon

@@ -54,8 +51,8 @@ done

> **NOTE:** If adjustments to the primary control network configuration made in
> */etc/core/core.conf* do not seem to take effect, check if there is anything
> set in the *Session Menu*, the *Options...* dialog. They may need to be
> cleared. These per session settings override the defaults in
> */etc/core/core.conf*.

## Control Network in Distributed Sessions

@@ -102,9 +99,9 @@ argument being the keyword *"shutdown"*.
Starting with EMANE 0.9.2, CORE will run EMANE instances within namespaces.
Since it is advisable to separate the OTA traffic from other traffic, we will
need more than a single channel leading out from the namespace. Up to three
auxiliary control networks may be defined. Multiple control networks are set
up in the */etc/core/core.conf* file. Lines *controlnet1*, *controlnet2* and
*controlnet3* define the auxiliary networks.

For example, having the following */etc/core/core.conf*:

@@ -114,18 +111,18 @@ controlnet1 = core1:172.18.1.0/24 core2:172.18.2.0/24 core3:172.18.3.0/24
controlnet2 = core1:172.19.1.0/24 core2:172.19.2.0/24 core3:172.19.3.0/24
```

This will activate the primary and two auxiliary control networks and add
interfaces *ctrl0*, *ctrl1*, *ctrl2* to each node. One use case would be to
assign *ctrl1* to the OTA manager device and *ctrl2* to the Event Service
device in the EMANE Options dialog box and leave *ctrl0* for CORE control
traffic.

> **NOTE:** *controlnet0* may be used in place of *controlnet* to configure
> the primary control network.

Unlike the primary control network, the auxiliary control networks will not
employ tunneling since their primary purpose is for efficiently transporting
multicast EMANE OTA and event traffic. Note that there is no per-session
configuration for auxiliary control networks.

To extend the auxiliary control networks across a distributed test

@@ -140,8 +137,8 @@ controlnetif3 = eth3
```

> **NOTE:** There is no need to assign an interface to the primary control
> network because tunnels are formed between the master and the slaves using IP
> addresses that are provided in *servers.conf*.

Shown below is a representative diagram of the configuration above.
docs/devguide.md

@@ -1,9 +1,6 @@
# CORE Developer's Guide

## Overview

The CORE source consists of several programming languages for
historical reasons. Current development focuses on the Python modules and

@@ -65,7 +62,7 @@ inv test-mock
## Linux Network Namespace Commands

Linux network namespace containers are often managed using the *Linux Container Tools* or *lxc-tools* package.
The lxc-tools website, http://lxc.sourceforge.net/, has more information. CORE does not use these
management utilities, but includes its own set of tools for instantiating and configuring network namespace containers.
This section describes these tools.

@@ -100,7 +97,7 @@ vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro

A script named *core-cleanup* is provided to clean up any running CORE emulations. It will attempt to kill any
remaining vnoded processes, kill any EMANE processes, remove the **/tmp/pycore.\*** session directories, and remove
any bridges or *nftables* rules. With a *-d* option, it will also kill any running CORE daemon.

### netns command
docs/distributed.md

@@ -1,8 +1,5 @@
# CORE - Distributed Emulation

## Overview

A large emulation scenario can be deployed on multiple emulation servers and

@@ -61,6 +58,7 @@ First the distributed servers must be configured to allow passwordless root
login over SSH.

On the distributed server:

```shell
# install openssh-server
sudo apt install openssh-server

@@ -81,6 +79,7 @@ sudo systemctl restart sshd
```

On the master server:

```shell
# install package if needed
sudo apt install openssh-client

@@ -99,6 +98,7 @@ connect_kwargs: {"key_filename": "/home/user/.ssh/core"}
```

On the distributed server:

```shell
# open sshd config
vi /etc/ssh/sshd_config

@@ -116,8 +116,9 @@ Make sure the value used below is the absolute path to the file
generated above **~/.ssh/core**

Add/update the fabric configuration file **/etc/fabric.yml**:

```yaml
connect_kwargs: { "key_filename": "/home/user/.ssh/core" }
```

## Add Emulation Servers in GUI

@@ -183,7 +184,7 @@ These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
1. Install CORE on the master server
1. Install the distributed CORE package on all servers needed
1. Install and configure public-key SSH access on all servers (if you want to use
   double-click shells or Widgets) for both the GUI user (for terminals) and root for running CORE commands
1. Update the CORE configuration as needed
1. Choose the servers that participate in the distributed emulation.
1. Assign nodes to desired servers, empty for the master server.
|
|||
|
||||
### RHEL Systems
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
Custom configuration required to avoid iptable rules being added and removing
|
||||
|
@ -26,8 +25,8 @@ Place the file below in **/etc/docker/docker.json**
|
|||
|
||||
```json
|
||||
{
|
||||
"bridge": "none",
|
||||
"iptables": false
|
||||
"bridge": "none",
|
||||
"iptables": false
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -53,6 +52,7 @@ Images used by Docker nodes in CORE need to have networking tools installed for
|
|||
CORE to automate setup and configuration of the network within the container.
|
||||
|
||||
Example Dockerfile:
|
||||
|
||||
```
|
||||
FROM ubuntu:latest
|
||||
RUN apt-get update
|
||||
|
@ -60,6 +60,7 @@ RUN apt-get install -y iproute2 ethtool
|
|||
```
|
||||
|
||||
Build image:
|
||||
|
||||
```shell
|
||||
sudo docker build -t <name> .
|
||||
```
|
||||
|
|
|
docs/emane.md

@@ -1,7 +1,4 @@
# EMANE (Extendable Mobile Ad-hoc Network Emulator)

## What is EMANE?

@@ -31,7 +28,7 @@ and instantiates one EMANE process in the namespace. The EMANE process binds a
user space socket to the TAP device for sending and receiving data from CORE.

An EMANE instance sends and receives OTA (Over-The-Air) traffic to and from
other EMANE instances via a control port (e.g. *ctrl0*, *ctrl1*). It also
sends and receives Events to and from the Event Service using the same or a
different control port. EMANE models are configured through the GUI's
configuration dialog. A corresponding EmaneModel Python class is sub-classed

@@ -93,7 +90,7 @@ want to have CORE subscribe to EMANE location events, set the following line
in the **core.conf** configuration file.

> **NOTE:** Do not set this option to True if you want to manually drag nodes around
> on the canvas to update their location in EMANE.

```shell
emane_event_monitor = True

@@ -104,6 +101,7 @@ prefix will place the DTD files in **/usr/local/share/emane/dtd** while CORE
expects them in **/usr/share/emane/dtd**.

Update the EMANE prefix configuration to resolve this problem.

```shell
emane_prefix = /usr/local
```

@@ -116,6 +114,7 @@ placed within the path defined by **emane_models_dir** in the CORE
configuration file. This path cannot end in **/emane**.

Here is an example model with documentation describing functionality:

```python
"""
Example custom emane model.

@@ -210,7 +209,7 @@ The EMANE models should be listed here for selection. (You may need to restart the
CORE daemon if it was running prior to installing the EMANE Python bindings.)

When an EMANE model is selected, you can click on the model's options button,
causing the GUI to query the CORE daemon for configuration items.
Each model will have different parameters, refer to the
EMANE documentation for an explanation of each item. The default values are
presented in the dialog. Clicking *Apply* and *Apply* again will store the

@@ -220,7 +219,7 @@ The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports
geographic location information for determining pathloss between nodes. A
default latitude and longitude location is provided by CORE and this
location-based pathloss is enabled by default; this is the *pathloss mode*
setting for the Universal PHY. Moving a node on the canvas while the
emulation is running generates location events for EMANE. To view or change
the geographic location or scale of the canvas use the *Canvas Size and Scale*
dialog available from the *Canvas* menu.

@@ -237,7 +236,7 @@ to be created in the virtual nodes that are linked to the EMANE WLAN. These
devices appear with interface names such as eth0, eth1, etc. The EMANE processes
should now be running in each namespace.

To view the configuration generated by CORE, look in the */tmp/pycore.nnnnn/* session
directory to find the generated EMANE xml files. One easy way to view
this information is by double-clicking one of the virtual nodes and listing the files
in the shell.

@@ -281,12 +280,11 @@ being used, along with changing any configuration setting from their defaults.

> **NOTE:** Here is a quick checklist for distributed emulation with EMANE.

1. Follow the steps outlined for normal CORE.
2. Assign nodes to desired servers.
3. Synchronize your machines' clocks prior to starting the emulation,
   using *ntp* or *ptp*. Some EMANE models are sensitive to timing.
4. Press the *Start* button to launch the distributed emulation.

Now when the Start button is used to instantiate the emulation, the local CORE
daemon will connect to other emulation servers that have been assigned
34 docs/grpc.md
@@ -1,7 +1,6 @@
# gRPC API

## Overview

[gRPC](https://grpc.io/) is a client/server API for interfacing with CORE
and used by the python GUI for driving all functionality. It is dependent

@@ -9,7 +8,7 @@ on having a running `core-daemon` instance to be leveraged.

A python client can be created from the raw generated grpc files included
with CORE or one can leverage a provided gRPC client that helps encapsulate
some functionality to try and help make things easier.

## Python Client
@@ -41,13 +40,13 @@ When creating nodes of type `NodeType.DEFAULT` these are the default models
and the services they map to (see the short example after this list).

* mdr
    * zebra, OSPFv3MDR, IPForward
* PC
    * DefaultRoute
* router
    * zebra, OSPFv2, OSPFv3, IPForward
* host
    * DefaultRoute, SSH
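
As a quick illustration of the client described above and of choosing one of these default models, here is a minimal sketch assembled from calls shown elsewhere on this page (`CoreGrpcClient`, `connect`, `create_session`, `add_node`, `start_session`); the node ids, positions, and model choices are arbitrary, and a running `core-daemon` is assumed.

```python
from core.api.grpc import client
from core.api.grpc.wrappers import Position

# create grpc client and connect to a running core-daemon
core = client.CoreGrpcClient()
core.connect()

# create a new session
session = core.create_session()

# add two default (namespace) nodes using models from the list above
position = Position(x=100, y=100)
node1 = session.add_node(1, model="router", position=position)
position = Position(x=300, y=100)
node2 = session.add_node(2, model="PC", position=position)

# start the session
core.start_session(session)
```
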
### Interface Helper
@@ -56,8 +55,10 @@ when creating interface data for nodes. Alternatively one can manually create
a `core.api.grpc.wrappers.Interface` class instead with appropriate information.

Manually creating a gRPC client interface:

```python
from core.api.grpc.wrappers import Interface

# id is optional and will be set to the next available id
# name is optional and will default to eth<id>
# mac is optional and will result in a randomly generated mac

@@ -72,6 +73,7 @@ iface = Interface(
```

Leveraging the interface helper class:

```python
from core.api.grpc import client

@@ -90,6 +92,7 @@ iface_data = iface_helper.create_iface(

Various events that can occur within a session can be listened to.

Event types:

* session - events for changes in session state and mobility start/stop/pause
* node - events for node movements and icon changes
* link - events for link configuration changes and wireless link add/delete

@@ -101,9 +104,11 @@ Event types:
from core.api.grpc import client
from core.api.grpc.wrappers import EventType


def event_listener(event):
    print(event)


# create grpc client and connect
core = client.CoreGrpcClient()
core.connect()
@@ -123,6 +128,7 @@ core.events(session.id, event_listener, [EventType.NODE])
Links can be configured at the time of creation or during runtime.

Currently supported configuration options (a short sketch follows this list):

* bandwidth (bps)
* delay (us)
* duplicate (%)
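
Below is a hedged sketch of applying these options to a link at runtime. The `LinkOptions` wrapper and the `options` field on a link are assumptions (they are not shown in this excerpt); `core.edit_link(session.id, link)` is the call that appears further down, and `core`, `session`, and `link` are presumed to come from a client set up as in the examples above.

```python
from core.api.grpc.wrappers import LinkOptions

# assumed wrapper: bandwidth in bps, delay in us, loss and duplicate in percent
link.options = LinkOptions(bandwidth=54_000_000, delay=5000, loss=5.0, dup=1)

# push the updated configuration to the running session
core.edit_link(session.id, link)
```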
@@ -167,6 +173,7 @@ core.edit_link(session.id, link)
```

### Peer to Peer Example

```python
# required imports
from core.api.grpc import client

@@ -198,6 +205,7 @@ core.start_session(session)
```

### Switch/Hub Example

```python
# required imports
from core.api.grpc import client

@@ -232,6 +240,7 @@ core.start_session(session)
```

### WLAN Example

```python
# required imports
from core.api.grpc import client

@@ -283,6 +292,7 @@ For EMANE you can import and use one of the existing models and
use its name for configuration.

Current models:

* core.emane.ieee80211abg.EmaneIeee80211abgModel
* core.emane.rfpipe.EmaneRfPipeModel
* core.emane.tdma.EmaneTdmaModel

@@ -315,7 +325,7 @@ session = core.create_session()
# create nodes
position = Position(x=200, y=200)
emane = session.add_node(
    1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name
)
position = Position(x=100, y=100)
node1 = session.add_node(2, model="mdr", position=position)

@@ -330,8 +340,8 @@ session.add_link(node1=node2, node2=emane, iface1=iface1)

# setting emane specific model configuration
emane.set_emane_model(EmaneIeee80211abgModel.name, {
    "eventservicettl": "2",
    "unicastrate": "3",
})

# start session

@@ -339,6 +349,7 @@ core.start_session(session)
```

EMANE Model Configuration:

```python
# emane network specific config, set on an emane node
# this setting applies to all nodes connected

@@ -359,6 +370,7 @@ Configuring the files of a service results in a specific hard coded script being
generated, instead of the default scripts, that may leverage dynamic generation.

The following features can be configured for a service:

* files - files that will be generated
* directories - directories that will be mounted unique to the node
* startup - commands to run to start a service

@@ -366,6 +378,7 @@ The following features can be configured for a service:
* shutdown - commands to run to stop a service

Editing service properties:

```python
# configure a service, for a node, for a given session
node.service_configs[service_name] = NodeServiceData(

@@ -381,6 +394,7 @@ When editing a service file, it must be the name of a `config`
file that the service will generate.

Editing a service file:

```python
# to edit the contents of a generated file you can specify
# the service, the file name, and its contents
88 docs/gui.md
@@ -1,9 +1,5 @@
# CORE GUI

![](static/core-gui.png)

## Overview

@@ -12,7 +8,7 @@ The GUI is used to draw nodes and network devices on a canvas, linking them
together to create an emulated network session.

After pressing the start button, CORE will proceed through these phases,
staying in the **runtime** phase. After the session is stopped, CORE will
proceed to the **data collection** phase before tearing down the emulated
state.

@@ -22,7 +18,7 @@ when these session states are reached.

## Prerequisites

Beyond installing CORE, you must have the CORE daemon running. This is done
on the command line with either systemd or sysv.

```shell

@@ -40,24 +36,24 @@ The GUI will create a directory in your home directory on first run called
~/.coregui. This directory will help layout various files that the GUI may use.

* .coregui/
    * backgrounds/
        * place backgrounds used for display in the GUI
    * custom_emane/
        * place to keep custom emane models to use with the core-daemon
    * custom_services/
        * place to keep custom services to use with the core-daemon
    * icons/
        * icons the GUI uses along with custom icons desired
    * mobility/
        * place to keep custom mobility files
    * scripts/
        * place to keep core related scripts
    * xmls/
        * place to keep saved session xml files
    * gui.log
        * log file when running the gui, look here when issues occur for exceptions etc
    * config.yaml
        * configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc)

## Modes of Operation
@@ -342,8 +338,8 @@ be selected by double-clicking its name in the list, or an interface name may
be entered into the text box.

> **NOTE:** When you press the Start button to instantiate your topology, the
> interface assigned to the RJ45 will be connected to the CORE topology. The
> interface can no longer be used by the system.

Multiple RJ45 nodes can be used within CORE and assigned to the same physical
interface if 802.1x VLANs are used. This allows for more RJ45 nodes than

@@ -378,10 +374,10 @@ address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.

> **NOTE:** Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. The *gretap* device
> has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
> bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
> large packets if other bridge ports have a higher MTU such as 1,500 bytes.

The GRE key is used to identify flows with GRE tunneling. This allows multiple
GRE tunnels to exist between that same pair of tunnel peers. A unique number

@@ -392,7 +388,7 @@ used.
Here are example commands for building the other end of a tunnel on a Linux
machine. In this example, a router in CORE has the virtual address
**10.0.0.1/24** and the CORE host machine has the (real) address
**198.51.100.34/24**. The Linux box
that will connect with the CORE machine is reachable over the (real) network
at **198.51.100.76/24**.
The emulated router is linked with the Tunnel Node. In the

@@ -501,21 +497,20 @@ CORE offers several levels of wireless emulation fidelity, depending on modeling
hardware.

* WLAN Node
    * uses set bandwidth, delay, and loss
    * links are enabled or disabled based on a set range
    * uses the least CPU when moving, but nothing extra when not moving
* Wireless Node
    * uses set bandwidth, delay, and initial loss
    * loss dynamically changes based on distance between nodes, which can be configured with range parameters
    * links are enabled or disabled based on a set range
    * uses more CPU to calculate loss for every movement, but nothing extra when not moving
* EMANE Node
    * uses a physical layer model to account for signal propagation, antenna profile effects and interference
      sources in order to provide a realistic environment for wireless experimentation
    * uses the most CPU for every packet, as complex calculations are used for fidelity
    * See [Wiki](https://github.com/adjacentlink/emane/wiki) for details on general EMANE usage
    * See [CORE EMANE](emane.md) for details on using EMANE in CORE

| Model | Type | Supported Platform(s) | Fidelity | Description |
|----------|--------|-----------------------|----------|-------------------------------------------------------------------------------|

@@ -545,7 +540,7 @@ The default configuration of the WLAN is set to use the basic range model. Having this model
selected causes **core-daemon** to calculate the distance between nodes based
on screen pixels. A numeric range in screen pixels is set for the wireless
network using the **Range** slider. When two wireless nodes are within range of
each other, a green line is drawn between them and they are linked. Two
wireless nodes that are farther than the range pixels apart are not linked.
During Execute mode, users may move wireless nodes around by clicking and
dragging them, and wireless links will be dynamically made or broken.

@@ -561,7 +556,8 @@ CORE has a few ways to script mobility.
| EMANE events | See [EMANE](emane.md) for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude. |

For the first method, you can create a mobility script using a text
editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate
the script with one of the wireless nodes
using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
button, and set the *mobility script file* field in the resulting *ns2script*
configuration dialog.
docs/index.md

@@ -15,21 +15,3 @@ networking scenarios, security studies, and increasing the size of physical test
* Runs applications and protocols without modification
* Drag and drop GUI
* Highly customizable

## Topics

| Topic | Description |
|--------------------------------------|-------------------------------------------------------------------|
| [Installation](install.md) | How to install CORE and its requirements |
| [Architecture](architecture.md) | Overview of the architecture |
| [Node Types](nodetypes.md) | Overview of node types supported within CORE |
| [GUI](gui.md) | How to use the GUI |
| [Python API](python.md) | Covers how to control core directly using python |
| [gRPC API](grpc.md) | Covers how to control core using gRPC |
| [Distributed](distributed.md) | Details for running CORE across multiple servers |
| [Control Network](ctrlnet.md) | How to use control networks to communicate with nodes from host |
| [Config Services](configservices.md) | Overview of provided config services and creating custom ones |
| [Services](services.md) | Overview of provided services and creating custom ones |
| [EMANE](emane.md) | Overview of EMANE integration and integrating custom EMANE models |
| [Performance](performance.md) | Notes on performance when using CORE |
| [Developers Guide](devguide.md) | Overview on how to contribute to CORE |
docs/install.md

@@ -1,10 +1,9 @@
# Installation

> **WARNING:** if Docker is installed, the default iptables rules will block CORE traffic

## Overview

CORE currently supports and provides the following install options, with the package
option being preferred.

@@ -13,6 +12,7 @@ option being preferred.
* [Dockerfile based install](#dockerfile-based-install)

### Requirements

Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous
containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.

@@ -21,31 +21,35 @@ containers, as a general rule you should select a machine having as much RAM and
* nftables compatible kernel and nft command line tool

### Supported Linux Distributions

The plan is to support recent Ubuntu and CentOS LTS releases.

Verified:

* Ubuntu - 18.04, 20.04, 22.04
* CentOS - 7.8

### Files

The following is a list of files that would be installed after installation.

* executables
    * `<prefix>/bin/{vcmd, vnode}`
        * the prefix can be adjusted using the script based install; the package install uses /usr
* python files
    * virtual environment `/opt/core/venv`
        * a local install will be local to the python version used
        * `python3 -c "import core; print(core.__file__)"`
    * scripts {core-daemon, core-cleanup, etc}
        * virtualenv `/opt/core/venv/bin`
        * local `/usr/local/bin`
* configuration files
    * `/etc/core/{core.conf, logging.conf}`
* ospf mdr repository files when using script based install
    * `<repo>/../ospf-mdr`

### Installed Scripts

The following python scripts are provided.

| Name | Description |
@@ -59,17 +63,20 @@ The following python scripts are provided.
| core-service-update | tool to help automate modifying a legacy service to match current naming |

### Upgrading from Older Release

Please make sure to uninstall any previous installations of CORE cleanly
before proceeding to install.

Clearing out a current install from 7.0.0+, making sure to provide the options
used for install (`-l` or `-p`):

```shell
cd <CORE_REPO>
inv uninstall <options>
```

Previous install was built from source for a CORE release older than 7.0.0:

```shell
cd <CORE_REPO>
sudo make uninstall

@@ -78,6 +85,7 @@ make clean
```

Installed from previously built packages:

```shell
# centos
sudo yum remove core

@@ -107,6 +115,7 @@ is run when uninstalling and would require the same options as given during the
> tk compatibility for python gui, and venv for virtual environments

Examples for install:

```shell
# recommended to upgrade to the latest version of pip before installation
# in python, can help avoid building from source issues

@@ -128,6 +137,7 @@ sudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl
```

Example for removal, requires using the same options as install:

```shell
# remove a standard install
sudo <yum/apt> remove core

@@ -142,6 +152,7 @@ sudo NO_PYTHON=1 <yum/apt> remove core
```

### Installing OSPF MDR

You will need to manually install OSPF MDR for routing nodes, since this is not
provided by the package.

@@ -159,6 +170,7 @@ sudo make install
When done see [Post Install](#post-install).

## Script Based Install

The script based installation will install system level dependencies, python library and
dependencies, as well as dependencies for building CORE.

@@ -166,6 +178,7 @@ The script based install also automatically builds and installs OSPF MDR, used by
on routing nodes. This can optionally be skipped.

Installation will carry out the following steps:

* installs system dependencies for building core
* builds vcmd/vnoded and python grpc files
* installs core into poetry managed virtual environment or locally, if flag is passed

@@ -188,6 +201,7 @@ The following tools will be leveraged during installation:
| [poetry](https://python-poetry.org/) | used to install python virtual environment or building a python wheel |

First we will need to clone and navigate to the CORE repo.

```shell
# clone CORE repo
git clone https://github.com/coreemu/core.git

@@ -229,6 +243,7 @@ Options:
When done see [Post Install](#post-install).

### Unsupported Linux Distribution

For unsupported OSs you could attempt to do the following to translate
an installation to your use case.

@@ -243,6 +258,7 @@ inv install --dry -v -p <prefix> -i <install type>
```

## Dockerfile Based Install

You can leverage one of the provided Dockerfiles, to run and launch CORE within a Docker container.

Since CORE nodes will leverage software available within the system for a given use case,
@@ -253,7 +269,7 @@ make sure to update and build the Dockerfile with desired software.
git clone https://github.com/coreemu/core.git
cd core
# build image
sudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu,oracle> .
# start container
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
# enable xhost access to the root user

@@ -265,6 +281,7 @@ sudo docker exec -it core core-gui
When done see [Post Install](#post-install).

## Installing EMANE

> **NOTE:** installing EMANE for the virtual environment is known to work for 1.21+

The recommended way to install EMANE is using prebuilt packages, otherwise

@@ -282,6 +299,7 @@ Also, these EMANE bindings need to be built using `protoc` 3.19+. So make sure
that is available and being picked up on PATH properly.

Examples for building and installing EMANE python bindings for use in CORE:

```shell
# if your system does not have protoc 3.19+
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip

@@ -306,32 +324,39 @@ inv install-emane -e <version tag>
```

## Post Install

After installation completes you are now ready to run CORE.

### Resolving Docker Issues

If you have Docker installed, by default it will change the iptables
forwarding chain to drop packets, which will cause issues for CORE traffic.

You can temporarily resolve the issue with the following command:

```shell
sudo iptables --policy FORWARD ACCEPT
```

Alternatively, you can configure Docker to avoid doing this, but it will likely
break normal Docker networking usage. Using the setting below will require
a restart.

Place the file contents below in **/etc/docker/docker.json**

```json
{
    "iptables": false
}
```

### Resolving Path Issues

One problem you may run into when running CORE, using the virtual environment or locally,
is issues related to your path.

To add support for your user to run scripts from the virtual environment:

```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin

@@ -339,6 +364,7 @@ export PATH=$PATH:/opt/core/venv/bin

This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.

```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon

@@ -350,6 +376,7 @@ sudop core-daemon
```

### Running CORE

The following assumes you have resolved PATH issues and set up the `sudop` alias.

```shell

@@ -360,6 +387,7 @@ core-gui
```

### Enabling Service

After installation, the core service is not enabled by default. If you desire to use the
service, run the following commands.
|
|||
# Install CentOS
|
||||
|
||||
## Overview
|
||||
|
||||
Below is a detailed path for installing CORE and related tooling on a fresh
|
||||
CentOS 7 install. Both of the examples below will install CORE into its
|
||||
own virtual environment located at **/opt/core/venv**. Both examples below
|
||||
|
@ -122,6 +124,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
|
|||
so some adjustments needs to be made.
|
||||
|
||||
To add support for your user to run scripts from the virtual environment:
|
||||
|
||||
```shell
|
||||
# can add to ~/.bashrc
|
||||
export PATH=$PATH:/opt/core/venv/bin
|
||||
|
@ -129,6 +132,7 @@ export PATH=$PATH:/opt/core/venv/bin
|
|||
|
||||
This will not solve the path issue when running as sudo, so you can do either
|
||||
of the following to compensate.
|
||||
|
||||
```shell
|
||||
# run command passing in the right PATH to pickup from the user running the command
|
||||
sudo env PATH=$PATH core-daemon
|
||||
|
|
|
@ -1,7 +1,9 @@
|
|||
# Install Ubuntu
|
||||
|
||||
## Overview
|
||||
|
||||
Below is a detailed path for installing CORE and related tooling on a fresh
|
||||
Ubuntu 22.04 install. Both of the examples below will install CORE into its
|
||||
Ubuntu 22.04 installation. Both of the examples below will install CORE into its
|
||||
own virtual environment located at **/opt/core/venv**. Both examples below
|
||||
also assume using **~/Documents** as the working directory.
|
||||
|
||||
|
@ -94,6 +96,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
|
|||
so some adjustments needs to be made.
|
||||
|
||||
To add support for your user to run scripts from the virtual environment:
|
||||
|
||||
```shell
|
||||
# can add to ~/.bashrc
|
||||
export PATH=$PATH:/opt/core/venv/bin
|
||||
|
@ -101,6 +104,7 @@ export PATH=$PATH:/opt/core/venv/bin
|
|||
|
||||
This will not solve the path issue when running as sudo, so you can do either
|
||||
of the following to compensate.
|
||||
|
||||
```shell
|
||||
# run command passing in the right PATH to pickup from the user running the command
|
||||
sudo env PATH=$PATH core-daemon
|
||||
|
|
|
docs/lxc.md

@@ -1,5 +1,7 @@
# LXC Support

## Overview

LXC nodes are provided by way of LXD to create nodes using predefined
images and provide file system separation.
docs/nodetypes.md

@@ -1,7 +1,4 @@
# Node Types

## Overview
docs/performance.md

@@ -1,8 +1,5 @@
# CORE Performance

## Overview

The top question about the performance of CORE is often *how many nodes can it

@@ -16,7 +13,6 @@ handle?* The answer depends on several factors:
| Network traffic | the more packets sent around the virtual network, the greater the CPU usage. |
| GUI usage | widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation. |

On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3
routing. On this hardware CORE can instantiate 100 or more nodes, but at

@@ -39,8 +35,8 @@ For a more detailed study of performance in CORE, refer to the following
publications:

* J\. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE
  Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
* Ahrenholz, J., Comparison of CORE Network Emulation Platforms, Proceedings
  of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
* J\. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time
  network emulator, Proceedings of IEEE MILCOM Conference, 2008.
docs/python.md

@@ -1,8 +1,5 @@
# Python API

## Overview

Writing your own Python scripts offers a rich programming environment with
docs/services.md

@@ -1,9 +1,6 @@
# Services (Deprecated)

## Overview

CORE uses the concept of services to specify what processes or scripts run on a
node when it is started. Layer-3 nodes such as routers and PCs are defined by

@@ -16,8 +13,8 @@ configuration files, startup index, starting commands, validation commands,
shutdown commands, and meta-data associated with a node.

> **NOTE:** **Network namespace nodes do not undergo the normal Linux boot process**
> using the **init**, **upstart**, or **systemd** frameworks. These
> lightweight nodes use configured CORE *services*.

## Available Services

@@ -72,10 +69,10 @@ The dialog has three tabs for configuring the different aspects of the service:
files, directories, and startup/shutdown.

> **NOTE:** A **yellow** customize icon next to a service indicates that the service
> requires customization (e.g. the *Firewall* service).
> A **green** customize icon indicates that a custom configuration exists.
> Click the *Defaults* button when customizing a service to remove any
> customizations.

The Files tab is used to display or edit the configuration files or scripts that
are used for this service. Files can be selected from a drop-down list, and

@@ -91,9 +88,9 @@ the Zebra service, because Quagga running on each node needs to write separate
PID files to that directory.

> **NOTE:** The **/var/log** and **/var/run** directories are
> mounted uniquely per-node by default.
> Per-node mount targets can be found in **/tmp/pycore.nnnnn/nN.conf/**
> (where *nnnnn* is the session number and *N* is the node number.)

The Startup/shutdown tab lists commands that are used to start and stop this
service. The startup index allows configuring when this service starts relative

@@ -121,7 +118,7 @@ produces a non-zero return value, an exception is generated, which will cause
an error to be displayed in the Check Emulation Light.

> **NOTE:** To start, stop, and restart services during run-time, right-click a
> node and use the *Services...* menu.

## New Services
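
The *New Services* content is truncated in this excerpt. For orientation only, here is a minimal sketch of what a legacy (deprecated) service class looks like; the attribute and method names follow the legacy `CoreService` API and are assumptions rather than content taken from this page.

```python
from typing import Tuple

from core.nodes.base import CoreNode
from core.services.coreservices import CoreService


class MyService(CoreService):
    # name/group identify the service in the GUI service lists
    name: str = "MyService"
    group: str = "Utility"
    # file(s) generated for the node and the command(s) used to start it
    configs: Tuple[str, ...] = ("myservice.sh",)
    startup: Tuple[str, ...] = ("bash myservice.sh",)

    @classmethod
    def generate_config(cls, node: CoreNode, filename: str) -> str:
        # return the contents of the generated file named in "configs"
        return "#!/bin/sh\necho hello from myservice\n"
```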
56 mkdocs.yml (new file)
@@ -0,0 +1,56 @@
site_name: CORE Documentation
use_directory_urls: false
theme:
  name: material
  palette:
    - scheme: slate
      toggle:
        icon: material/brightness-4
        name: Switch to Light Mode
      primary: teal
      accent: teal
    - scheme: default
      toggle:
        icon: material/brightness-7
        name: Switch to Dark Mode
      primary: teal
      accent: teal
  features:
    - navigation.path
    - navigation.instant
    - navigation.footer
    - content.code.copy
markdown_extensions:
  - pymdownx.snippets:
      base_path: docs
  - admonition
  - pymdownx.details
  - pymdownx.superfences
  - pymdownx.tabbed:
      alternate_style: true
  - pymdownx.inlinehilite
nav:
  - Home: index.md
  - Overview:
      - Architecture: architecture.md
      - Performance: performance.md
  - Installation:
      - Overview: install.md
      - Ubuntu: install_ubuntu.md
      - CentOS: install_centos.md
  - Detailed Topics:
      - GUI: gui.md
      - API:
          - Python: python.md
          - gRPC: grpc.md
      - Services:
          - Config Services: configservices.md
          - Services (Deprecated): services.md
      - EMANE: emane.md
      - Node Types:
          - Overview: nodetypes.md
          - Docker: docker.md
          - LXC: lxc.md
      - Distributed: distributed.md
      - Control Network: ctrlnet.md
  - Developers Guide: devguide.md