docs: initial changes to support using mkdocs material

Blake Harnden 2023-03-07 21:49:50 -08:00
parent 785cf82ba3
commit 078e0df329
20 changed files with 253 additions and 187 deletions

.gitignore

@ -18,6 +18,9 @@ configure~
debian
stamp-h1
# python virtual environments
venv
# generated protobuf files
*_pb2.py
*_pb2_grpc.py

docs/architecture.md

@ -1,8 +1,5 @@
# CORE Architecture
* Table of Contents
{:toc}
## Main Components
* core-daemon

docs/configservices.md

@ -1,7 +1,4 @@
# CORE Config Services
* Table of Contents
{:toc}
# Config Services
## Overview
@ -15,6 +12,7 @@ CORE services are a convenience for creating reusable dynamic scripts
to run on nodes, for carrying out specific task(s).
This boils down to the following functions:
* generating files the service will use, either directly for commands or for configuration
* command(s) for starting a service
* command(s) for validating a service
@ -121,6 +119,7 @@ from typing import Dict, List
from core.config import ConfigString, ConfigBool, Configuration
from core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir
# class that subclasses ConfigService
class ExampleService(ConfigService):
# unique name for your service within CORE

docs/ctrlnet.md

@ -1,8 +1,5 @@
# CORE Control Network
* Table of Contents
{:toc}
## Overview
The CORE control network allows the virtual nodes to communicate with their
@ -31,14 +28,14 @@ control networks, the session option should be used instead of the *core.conf*
default.
> **NOTE:** If you have a large scenario with more than 253 nodes, use a control
network prefix that allows more than the suggested */24*, such as */23* or
greater.
> network prefix that allows more than the suggested */24*, such as */23* or
> greater.
> **NOTE:** Running a session with a control network can fail if a previous
session has set up a control network and its bridge is still up. Close
the previous session first or wait for it to complete. If unable to, the
> session has set up a control network and its bridge is still up. Close
> the previous session first or wait for it to complete. If unable to, the
*core-daemon* may need to be restarted and the lingering bridge(s) removed
manually.
> manually.
```shell
# Restart the CORE Daemon
@ -54,8 +51,8 @@ done
> **NOTE:** If adjustments to the primary control network configuration made in
*/etc/core/core.conf* do not seem to take effect, check if there is anything
set in the *Session Menu*, the *Options...* dialog. They may need to be
cleared. These per-session settings override the defaults in
> set in the *Session Menu*, the *Options...* dialog. They may need to be
> cleared. These per-session settings override the defaults in
*/etc/core/core.conf*.
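
For reference, the primary control network prefix is set with the **controlnet**
entry in **/etc/core/core.conf**. A minimal sketch, where the */23* value is
purely illustrative:

```shell
# /etc/core/core.conf
# a wider prefix supports scenarios with more than 253 nodes
controlnet = 172.16.0.0/23
```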
## Control Network in Distributed Sessions

docs/devguide.md

@ -1,9 +1,6 @@
# CORE Developer's Guide
* Table of Contents
{:toc}
## Repository Overview
## Overview
The CORE source consists of several programming languages for
historical reasons. Current development focuses on the Python modules and

docs/distributed.md

@ -1,8 +1,5 @@
# CORE - Distributed Emulation
* Table of Contents
{:toc}
## Overview
A large emulation scenario can be deployed on multiple emulation servers and
@ -61,6 +58,7 @@ First the distributed servers must be configured to allow passwordless root
login over SSH.
On distributed server:
```shell
# install openssh-server
sudo apt install openssh-server
@ -81,6 +79,7 @@ sudo systemctl restart sshd
```
On master server:
```shell
# install package if needed
sudo apt install openssh-client
@ -99,6 +98,7 @@ connect_kwargs: {"key_filename": "/home/user/.ssh/core"}
```
On distributed server:
```shell
# open sshd config
vi /etc/ssh/sshd_config
@ -116,6 +116,7 @@ Make sure the value used below is the absolute path to the file
generated above **~/.ssh/core**.
Add/update the fabric configuration file **/etc/fabric.yml**:
```yaml
connect_kwargs: { "key_filename": "/home/user/.ssh/core" }
```
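
To sanity-check the setup from the master server, a passwordless login should
now succeed; a sketch, where the server address is a placeholder:

```shell
# should connect as root without prompting for a password
ssh -i ~/.ssh/core root@<distributed-server>
```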

docs/docker.md

@ -15,7 +15,6 @@ sudo apt install docker.io
### RHEL Systems
## Configuration
Custom configuration is required to avoid iptables rules being added and removing
@ -53,6 +52,7 @@ Images used by Docker nodes in CORE need to have networking tools installed for
CORE to automate setup and configuration of the network within the container.
Example Dockerfile:
```
FROM ubuntu:latest
RUN apt-get update
@ -60,6 +60,7 @@ RUN apt-get install -y iproute2 ethtool
```
Build image:
```shell
sudo docker build -t <name> .
```
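
As a quick sanity check that the built image contains the networking tools CORE
relies on (reusing the `<name>` tag from the build command above):

```shell
# confirm iproute2 is available inside the image
sudo docker run --rm <name> ip link show
```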

docs/emane.md

@ -1,7 +1,4 @@
# CORE/EMANE
* Table of Contents
{:toc}
# EMANE (Extendable Mobile Ad-hoc Network Emulator)
## What is EMANE?
@ -93,7 +90,7 @@ want to have CORE subscribe to EMANE location events, set the following line
in the **core.conf** configuration file.
> **NOTE:** Do not set this option to True if you want to manually drag nodes around
on the canvas to update their location in EMANE.
> on the canvas to update their location in EMANE.
```shell
emane_event_monitor = True
@ -104,6 +101,7 @@ prefix will place the DTD files in **/usr/local/share/emane/dtd** while CORE
expects them in **/usr/share/emane/dtd**.
Update the EMANE prefix configuration to resolve this problem.
```shell
emane_prefix = /usr/local
```
@ -116,6 +114,7 @@ placed within the path defined by **emane_models_dir** in the CORE
configuration file. This path cannot end in **/emane**.
Here is an example model with documentation describing functionality:
```python
"""
Example custom emane model.
@ -287,7 +286,6 @@ being used, along with changing any configuration setting from their defaults.
using *ntp* or *ptp*. Some EMANE models are sensitive to timing.
4. Press the *Start* button to launch the distributed emulation.
Now when the Start button is used to instantiate the emulation, the local CORE
daemon will connect to other emulation servers that have been assigned
to nodes. Each server will have its own session directory where the

docs/grpc.md

@ -1,7 +1,6 @@
# gRPC API
* Table of Contents
{:toc}
## Overview
[gRPC](https://grpc.io/) is a client/server API for interfacing with CORE
and is used by the python GUI for driving all functionality. It is dependent
@ -9,7 +8,7 @@ on having a running `core-daemon` instance to be leveraged.
A python client can be created from the raw generated grpc files included
with CORE or one can leverage a provided gRPC client that helps encapsulate
some of the functionality to try and help make things easier.
some functionality to make usage easier.
## Python Client
@ -56,8 +55,10 @@ when creating interface data for nodes. Alternatively one can manually create
a `core.api.grpc.wrappers.Interface` class instead with appropriate information.
Manually creating gRPC client interface:
```python
from core.api.grpc.wrappers import Interface
# id is optional and will set to the next available id
# name is optional and will default to eth<id>
# mac is optional and will result in a randomly generated mac
@ -72,6 +73,7 @@ iface = Interface(
```
Leveraging the interface helper class:
```python
from core.api.grpc import client
@ -90,6 +92,7 @@ iface_data = iface_helper.create_iface(
Various events that can occur within a session can be listened to.
Event types:
* session - events for changes in session state and mobility start/stop/pause
* node - events for node movements and icon changes
* link - events for link configuration changes and wireless link add/delete
@ -101,9 +104,11 @@ Event types:
from core.api.grpc import client
from core.api.grpc.wrappers import EventType
def event_listener(event):
    print(event)
# create grpc client and connect
core = client.CoreGrpcClient()
core.connect()
@ -123,6 +128,7 @@ core.events(session.id, event_listener, [EventType.NODE])
Links can be configured at the time of creation or during runtime.
Currently supported configuration options:
* bandwidth (bps)
* delay (us)
* duplicate (%)
@ -167,6 +173,7 @@ core.edit_link(session.id, link)
```
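
As a brief sketch of configuring options before an edit, assuming the
`core.api.grpc.wrappers.LinkOptions` fields shown:

```python
from core.api.grpc.wrappers import LinkOptions

# update a link's bandwidth (bps), delay (us), and loss (%), then apply it
link.options = LinkOptions(bandwidth=54_000_000, delay=5000, loss=5.0)
core.edit_link(session.id, link)
```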
### Peer to Peer Example
```python
# required imports
from core.api.grpc import client
@ -198,6 +205,7 @@ core.start_session(session)
```
### Switch/Hub Example
```python
# required imports
from core.api.grpc import client
@ -232,6 +240,7 @@ core.start_session(session)
```
### WLAN Example
```python
# required imports
from core.api.grpc import client
@ -283,6 +292,7 @@ For EMANE you can import and use one of the existing models and
use its name for configuration.
Current models:
* core.emane.ieee80211abg.EmaneIeee80211abgModel
* core.emane.rfpipe.EmaneRfPipeModel
* core.emane.tdma.EmaneTdmaModel
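
A sketch of referencing a model by name when creating an EMANE node, reusing
the `session` object from the examples above; the node id and position are
arbitrary:

```python
from core.api.grpc.wrappers import NodeType, Position
from core.emane.ieee80211abg import EmaneIeee80211abgModel

# create an emane node and assign the model by name
position = Position(x=300, y=300)
emane_node = session.add_node(4, _type=NodeType.EMANE, position=position,
                              emane=EmaneIeee80211abgModel.name)
```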
@ -339,6 +349,7 @@ core.start_session(session)
```
EMANE Model Configuration:
```python
# emane network specific config, set on an emane node
# this setting applies to all nodes connected
@ -359,6 +370,7 @@ Configuring the files of a service results in a specific hard coded script being
generated, instead of the default scripts, which may leverage dynamic generation.
The following features can be configured for a service:
* files - files that will be generated
* directories - directories that will be mounted unique to the node
* startup - commands to run to start a service
@ -366,6 +378,7 @@ The following features can be configured for a service:
* shutdown - commands to run to stop a service
Editing service properties:
```python
# configure a service, for a node, for a given session
node.service_configs[service_name] = NodeServiceData(
@ -381,6 +394,7 @@ When editing a service file, it must be the name of `config`
file that the service will generate.
Editing a service file:
```python
# to edit the contents of a generated file you can specify
# the service, the file name, and its contents

docs/gui.md

@ -1,9 +1,5 @@
# CORE GUI
* Table of Contents
{:toc}
![](static/core-gui.png)
## Overview
@ -342,8 +338,8 @@ be selected by double-clicking its name in the list, or an interface name may
be entered into the text box.
> **NOTE:** When you press the Start button to instantiate your topology, the
interface assigned to the RJ45 will be connected to the CORE topology. The
interface can no longer be used by the system.
> interface assigned to the RJ45 will be connected to the CORE topology. The
> interface can no longer be used by the system.
Multiple RJ45 nodes can be used within CORE and assigned to the same physical
interface if 802.1Q VLANs are used. This allows for more RJ45 nodes than
@ -378,10 +374,10 @@ address of the tunnel peer. This is the IP address of the other CORE machine or
physical machine, not an IP address of another virtual node.
> **NOTE:** Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. The *gretap* device
has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
bridge's MTU
becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
large packets if other bridge ports have a higher MTU such as 1,500 bytes.
> has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the
> bridge's MTU
> becomes 1,458 bytes. The Linux bridge will not perform fragmentation for
> large packets if other bridge ports have a higher MTU such as 1,500 bytes.
The GRE key is used to identify flows with GRE tunneling. This allows multiple
GRE tunnels to exist between that same pair of tunnel peers. A unique number
@ -516,7 +512,6 @@ hardware.
* See [Wiki](https://github.com/adjacentlink/emane/wiki) for details on general EMANE usage
* See [CORE EMANE](emane.md) for details on using EMANE in CORE
| Model | Type | Supported Platform(s) | Fidelity | Description |
|----------|--------|-----------------------|----------|-------------------------------------------------------------------------------|
| WLAN | On/Off | Linux | Low | Ethernet bridging with nftables |
@ -561,7 +556,8 @@ CORE has a few ways to script mobility.
| EMANE events | See [EMANE](emane.md) for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude. |
For the first method, you can create a mobility script using a text
editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate the script with one of the wireless
editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate
the script with one of the wireless networks
using the WLAN configuration dialog box. Click the *ns-2 mobility script...*
button, and set the *mobility script file* field in the resulting *ns2script*
configuration dialog.

docs/index.md

@ -15,21 +15,3 @@ networking scenarios, security studies, and increasing the size of physical test
* Runs applications and protocols without modification
* Drag and drop GUI
* Highly customizable
## Topics
| Topic | Description |
|--------------------------------------|-------------------------------------------------------------------|
| [Installation](install.md) | How to install CORE and its requirements |
| [Architecture](architecture.md) | Overview of the architecture |
| [Node Types](nodetypes.md) | Overview of node types supported within CORE |
| [GUI](gui.md) | How to use the GUI |
| [Python API](python.md) | Covers how to control core directly using python |
| [gRPC API](grpc.md) | Covers how to control core using gRPC |
| [Distributed](distributed.md) | Details for running CORE across multiple servers |
| [Control Network](ctrlnet.md) | How to use control networks to communicate with nodes from host |
| [Config Services](configservices.md) | Overview of provided config services and creating custom ones |
| [Services](services.md) | Overview of provided services and creating custom ones |
| [EMANE](emane.md) | Overview of EMANE integration and integrating custom EMANE models |
| [Performance](performance.md) | Notes on performance when using CORE |
| [Developers Guide](devguide.md) | Overview on how to contribute to CORE |

docs/install.md

@ -1,10 +1,9 @@
# Installation
* Table of Contents
{:toc}
> **WARNING:** If Docker is installed, the default iptables rules will block CORE traffic
## Overview
CORE currently supports and provides the following install options, with the package
option being preferred.
@ -13,6 +12,7 @@ option being preferred.
* [Dockerfile based install](#dockerfile-based-install)
### Requirements
Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous
containers, as a general rule you should select a machine having as much RAM and CPU resources as possible.
@ -21,13 +21,16 @@ containers, as a general rule you should select a machine having as much RAM and
* nftables compatible kernel and nft command line tool
### Supported Linux Distributions
The plan is to support recent Ubuntu and CentOS LTS releases.
Verified:
* Ubuntu - 18.04, 20.04, 22.04
* CentOS - 7.8
### Files
The following is a list of files that will be present after installation.
* executables
@ -46,6 +49,7 @@ The following is a list of files that would be installed after installation.
* `<repo>/../ospf-mdr`
### Installed Scripts
The following python scripts are provided.
| Name | Description |
@ -59,17 +63,20 @@ The following python scripts are provided.
| core-service-update | tool to automate updating a legacy service to match current naming |
### Upgrading from Older Release
Please make sure to uninstall any previous installations of CORE cleanly
before proceeding to install.
Clear out a current install from 7.0.0+, making sure to provide the options
used during install (`-l` or `-p`):
```shell
cd <CORE_REPO>
inv uninstall <options>
```
If the previous install was built from source for a CORE release older than 7.0.0:
```shell
cd <CORE_REPO>
sudo make uninstall
@ -78,6 +85,7 @@ make clean
```
If installed from previously built packages:
```shell
# centos
sudo yum remove core
@ -107,6 +115,7 @@ is ran when uninstalling and would require the same options as given, during the
> tk compatibility for python gui, and venv for virtual environments
Examples for install:
```shell
# recommended to upgrade to the latest version of pip before installation
# in python, can help avoid building from source issues
@ -128,6 +137,7 @@ sudo <python> -m pip install /opt/core/core-<version>-py3-none-any.whl
```
Example for removal, requires using the same options as install:
```shell
# remove a standard install
sudo <yum/apt> remove core
@ -142,6 +152,7 @@ sudo NO_PYTHON=1 <yum/apt> remove core
```
### Installing OSPF MDR
You will need to manually install OSPF MDR for routing nodes, since this is not
provided by the package.
@ -159,6 +170,7 @@ sudo make install
When done see [Post Install](#post-install).
## Script Based Install
The script based installation will install system level dependencies, the
python library and its dependencies, as well as dependencies for building CORE.
@ -166,6 +178,7 @@ The script based install also automatically builds and installs OSPF MDR, used by
on routing nodes. This can optionally be skipped.
Installation will carry out the following steps:
* installs system dependencies for building core
* builds vcmd/vnoded and python grpc files
* installs core into poetry managed virtual environment or locally, if flag is passed
@ -188,6 +201,7 @@ The following tools will be leveraged during installation:
| [poetry](https://python-poetry.org/) | used to install the python virtual environment or build a python wheel |
First we will need to clone and navigate to the CORE repo.
```shell
# clone CORE repo
git clone https://github.com/coreemu/core.git
@ -229,6 +243,7 @@ Options:
When done see [Post Install](#post-install).
### Unsupported Linux Distribution
For unsupported OSs you could attempt to do the following to translate
an installation to your use case.
@ -243,6 +258,7 @@ inv install --dry -v -p <prefix> -i <install type>
```
## Dockerfile Based Install
You can leverage one of the provided Dockerfiles to run and launch CORE within a Docker container.
Since CORE nodes will leverage software available within the system for a given use case,
@ -253,7 +269,7 @@ make sure to update and build the Dockerfile with desired software.
git clone https://github.com/coreemu/core.git
cd core
# build image
sudo docker build -t core -f Dockerfile.<centos,ubuntu,oracle> .
sudo docker build -t core -f dockerfiles/Dockerfile.<centos,ubuntu,oracle> .
# start container
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
# enable xhost access to the root user
@ -265,6 +281,7 @@ sudo docker exec -it core core-gui
When done see [Post Install](#post-install).
## Installing EMANE
> **NOTE:** installing EMANE for the virtual environment is known to work for 1.21+
The recommended way to install EMANE is using prebuilt packages, otherwise
@ -282,6 +299,7 @@ Also, these EMANE bindings need to be built using `protoc` 3.19+. So make sure
that is available and being picked up on PATH properly.
Examples for building and installing EMANE python bindings for use in CORE:
```shell
# if your system does not have protoc 3.19+
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.19.6/protoc-3.19.6-linux-x86_64.zip
@ -306,21 +324,26 @@ inv install-emane -e <version tag>
```
## Post Install
After installation completes you are now ready to run CORE.
### Resolving Docker Issues
If you have Docker installed, by default it will change the iptables
forwarding chain to drop packets, which will cause issues for CORE traffic.
You can temporarily resolve the issue with the following command:
```shell
sudo iptables --policy FORWARD ACCEPT
```
Alternatively, you can configure Docker to avoid doing this, but it will likely
break normal Docker networking usage. Using the setting below will require
a restart.
Place the file contents below in **/etc/docker/daemon.json**
```json
{
"iptables": false
@ -328,10 +351,12 @@ Place the file contents below in **/etc/docker/docker.json**
```
### Resolving Path Issues
One problem you may run into when running CORE, whether using the virtual
environment or a local install, is issues related to your path.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -339,6 +364,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon
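# define the sudop alias used below (an assumed definition; add to ~/.bashrc to persist)
alias sudop='sudo env PATH=$PATH'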
@ -350,6 +376,7 @@ sudop core-daemon
```
### Running CORE
The following assumes you have resolved PATH issues and set up the `sudop` alias.
```shell
@ -360,6 +387,7 @@ core-gui
```
### Enabling Service
After installation, the core service is not enabled by default. If you desire to use the
service, run the following commands.
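
A minimal sketch of those commands, assuming the installed systemd unit is
named **core-daemon**:

```shell
# enable the service to run at boot and start it now
sudo systemctl enable core-daemon
sudo systemctl start core-daemon
```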

docs/install_centos.md

@ -1,5 +1,7 @@
# Install CentOS
## Overview
Below is a detailed path for installing CORE and related tooling on a fresh
CentOS 7 install. Both of the examples below will install CORE into its
own virtual environment located at **/opt/core/venv**. Both examples below
@ -122,6 +124,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
so some adjustments need to be made.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -129,6 +132,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon

docs/install_ubuntu.md

@ -1,7 +1,9 @@
# Install Ubuntu
## Overview
Below is a detailed path for installing CORE and related tooling on a fresh
Ubuntu 22.04 install. Both of the examples below will install CORE into its
Ubuntu 22.04 installation. Both of the examples below will install CORE into its
own virtual environment located at **/opt/core/venv**. Both examples below
also assume using **~/Documents** as the working directory.
@ -94,6 +96,7 @@ The CORE virtual environment and related scripts will not be found on your PATH,
so some adjustments need to be made.
To add support for your user to run scripts from the virtual environment:
```shell
# can add to ~/.bashrc
export PATH=$PATH:/opt/core/venv/bin
@ -101,6 +104,7 @@ export PATH=$PATH:/opt/core/venv/bin
This will not solve the path issue when running as sudo, so you can do either
of the following to compensate.
```shell
# run command passing in the right PATH to pickup from the user running the command
sudo env PATH=$PATH core-daemon

docs/lxc.md

@ -1,5 +1,7 @@
# LXC Support
## Overview
LXC nodes are provided by way of LXD to create nodes using predefined
images and provide file system separation.
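
As a quick sketch of verifying the LXD side is ready before creating LXC nodes
(assumes LXD is installed and initialized):

```shell
# list images available to LXD; LXC nodes reference these by name
lxc image list
```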

docs/nodetypes.md

@ -1,7 +1,4 @@
# CORE Node Types
* Table of Contents
{:toc}
# Node Types
## Overview

docs/performance.md

@ -1,8 +1,5 @@
# CORE Performance
* Table of Contents
{:toc}
## Overview
The top question about the performance of CORE is often *how many nodes can it
@ -16,7 +13,6 @@ handle?* The answer depends on several factors:
| Network traffic | the more packets sent around the virtual network, the greater the CPU usage. |
| GUI usage | widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation. |
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running Linux,
we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3
routing. On this hardware CORE can instantiate 100 or more nodes, but at

docs/python.md

@ -1,8 +1,5 @@
# Python API
* Table of Contents
{:toc}
## Overview
Writing your own Python scripts offers a rich programming environment with

docs/services.md

@ -1,9 +1,6 @@
# CORE Services
# Services (Deprecated)
* Table of Contents
{:toc}
## Services
## Overview
CORE uses the concept of services to specify what processes or scripts run on a
node when it is started. Layer-3 nodes such as routers and PCs are defined by
@ -16,8 +13,8 @@ configuration files, startup index, starting commands, validation commands,
shutdown commands, and meta-data associated with a node.
> **NOTE:** **Network namespace nodes do not undergo the normal Linux boot process**
using the **init**, **upstart**, or **systemd** frameworks. These
lightweight nodes use configured CORE *services*.
> using the **init**, **upstart**, or **systemd** frameworks. These
> lightweight nodes use configured CORE *services*.
## Available Services
@ -72,10 +69,10 @@ The dialog has three tabs for configuring the different aspects of the service:
files, directories, and startup/shutdown.
> **NOTE:** A **yellow** customize icon next to a service indicates that service
requires customization (e.g. the *Firewall* service).
A **green** customize icon indicates that a custom configuration exists.
Click the *Defaults* button when customizing a service to remove any
customizations.
> requires customization (e.g. the *Firewall* service).
> A **green** customize icon indicates that a custom configuration exists.
> Click the *Defaults* button when customizing a service to remove any
> customizations.
The Files tab is used to display or edit the configuration files or scripts that
are used for this service. Files can be selected from a drop-down list, and
@ -91,8 +88,8 @@ the Zebra service, because Quagga running on each node needs to write separate
PID files to that directory.
> **NOTE:** The **/var/log** and **/var/run** directories are
mounted uniquely per-node by default.
Per-node mount targets can be found in **/tmp/pycore.nnnnn/nN.conf/**
> mounted uniquely per-node by default.
> Per-node mount targets can be found in **/tmp/pycore.nnnnn/nN.conf/**
(where *nnnnn* is the session number and *N* is the node number.)
The Startup/shutdown tab lists commands that are used to start and stop this
@ -121,7 +118,7 @@ produces a non-zero return value, an exception is generated, which will cause
an error to be displayed in the Check Emulation Light.
> **NOTE:** To start, stop, and restart services during run-time, right-click a
node and use the *Services...* menu.
> node and use the *Services...* menu.
## New Services

mkdocs.yml (new file)

@ -0,0 +1,56 @@
site_name: CORE Documentation
use_directory_urls: false
theme:
  name: material
  palette:
    - scheme: slate
      toggle:
        icon: material/brightness-4
        name: Switch to Light Mode
      primary: teal
      accent: teal
    - scheme: default
      toggle:
        icon: material/brightness-7
        name: Switch to Dark Mode
      primary: teal
      accent: teal
  features:
    - navigation.path
    - navigation.instant
    - navigation.footer
    - content.code.copy
markdown_extensions:
  - pymdownx.snippets:
      base_path: docs
  - admonition
  - pymdownx.details
  - pymdownx.superfences
  - pymdownx.tabbed:
      alternate_style: true
  - pymdownx.inlinehilite
nav:
  - Home: index.md
  - Overview:
      - Architecture: architecture.md
      - Performance: performance.md
  - Installation:
      - Overview: install.md
      - Ubuntu: install_ubuntu.md
      - CentOS: install_centos.md
  - Detailed Topics:
      - GUI: gui.md
      - API:
          - Python: python.md
          - gRPC: grpc.md
      - Services:
          - Config Services: configservices.md
          - Services (Deprecated): services.md
      - EMANE: emane.md
      - Node Types:
          - Overview: nodetypes.md
          - Docker: docker.md
          - LXC: lxc.md
      - Distributed: distributed.md
      - Control Network: ctrlnet.md
      - Developers Guide: devguide.md
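
With this configuration in place, the site can be previewed locally; a sketch
assuming the mkdocs-material package is installed:

```shell
# install the theme and serve the docs at http://127.0.0.1:8000
pip install mkdocs-material
mkdocs serve
```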