docs: update node types to include lxc/docker type documentation, instead of being hidden within examples

Author: Blake Harnden, 2021-11-11 10:59:11 -08:00
parent 1ce6e51318
commit b78c07bd24
5 changed files with 58 additions and 46 deletions

@@ -1,5 +0,0 @@

```json
{
  "bridge": "none",
  "iptables": false
}
```

@@ -1,28 +1,40 @@

# Docker Node Support

## Overview

Provided below is some information for helping set up and use Docker
nodes within a CORE scenario.

## Installation

### Debian Systems

```shell
sudo apt install docker.io
```

### RHEL Systems

## Configuration

Custom configuration is required to avoid iptables rules being added and to
remove the need for the default docker network, since CORE will be
orchestrating connections between nodes.

Place the file below in **/etc/docker/daemon.json**

```json
{
  "bridge": "none",
  "iptables": false
}
```
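As a runnable sketch, the same configuration can be written out and sanity-checked before installing it. The temp path below is illustrative only; on a real host the destination is `/etc/docker/daemon.json`, and the docker service needs a restart for the settings to take effect:

```shell
# write the configuration to a temp path (illustrative; the real
# destination is /etc/docker/daemon.json, followed by a restart of
# the docker service, e.g. sudo systemctl restart docker)
mkdir -p /tmp/docker-cfg
cat > /tmp/docker-cfg/daemon.json <<'EOF'
{
  "bridge": "none",
  "iptables": false
}
EOF
# sanity check that the file is valid JSON
python3 -m json.tool /tmp/docker-cfg/daemon.json
```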
## Group Setup

To use Docker nodes within the python GUI, you will need to make sure the
user running the GUI is a member of the docker group.

```shell
# add group if does not exist
```

@@ -35,18 +47,10 @@ sudo usermod -aG docker $USER

```shell
newgrp docker
```
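A quick way to check that the group change took effect in the current shell (here `docker` is the group name, not the CLI):

```shell
# id -nG lists the current user's groups; grep -w matches "docker"
# as a whole word
if id -nG | grep -qw docker; then
    echo "user is in the docker group"
else
    echo "not yet in the docker group - log out/in or run: newgrp docker"
fi
```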
## Image Requirements

Images used by Docker nodes in CORE need to have networking tools installed for
CORE to automate setup and configuration of the network within the container.

Example Dockerfile:

@@ -59,3 +63,8 @@ Build image:

```shell
sudo docker build -t <name> .
```
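The example Dockerfile's contents are elided by the diff above. As a hypothetical sketch only (the base image and package names are assumptions, not the repo's actual example), an image satisfying the networking-tools requirement might look like:

```shell
# write a hypothetical Dockerfile: an Ubuntu base with the networking
# tools (iproute2, ethtool) that CORE uses to configure interfaces
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y iproute2 ethtool
EOF
```

Building it with `sudo docker build -t <name> .` as above would then produce an image usable for Docker nodes.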
## Tools and Versions Tested With
* Docker version 18.09.5, build e8ff056
* nsenter from util-linux 2.31.1

@@ -20,15 +20,15 @@ networking scenarios, security studies, and increasing the size of physical test

| Topic | Description |
|-------|-------------|
|[Installation](install.md)|How to install CORE and its requirements|
|[Architecture](architecture.md)|Overview of the architecture|
|[Node Types](nodetypes.md)|Overview of node types supported within CORE|
|[GUI](gui.md)|How to use the GUI|
|[(BETA) Python GUI](pygui.md)|How to use the BETA python based GUI|
|[Python API](python.md)|Covers how to control CORE directly using python|
|[gRPC API](grpc.md)|Covers how to control CORE using gRPC|
|[Distributed](distributed.md)|Details for running CORE across multiple servers|
|[Control Network](ctrlnet.md)|How to use control networks to communicate with nodes from host|
|[Services](services.md)|Overview of provided services and creating custom ones|
|[EMANE](emane.md)|Overview of EMANE integration and integrating custom EMANE models|
|[Performance](performance.md)|Notes on performance when using CORE|

@@ -1,11 +1,12 @@

# LXC Support

LXC nodes are provided by way of LXD to create nodes using predefined
images and provide file system separation.

## Installation

### Debian Systems

```shell
sudo snap install lxd
```

@@ -38,8 +39,3 @@ newgrp lxd

* LXD 3.14
* nsenter from util-linux 2.31.1

@@ -5,18 +5,30 @@

## Overview

Different node types can be used within CORE, each with their own
tradeoffs and functionality.

## CORE Nodes

CORE nodes are the standard node type typically used in CORE. They are
backed by Linux network namespaces. They use very little system resources
in order to emulate a network. They do, however, share the host's file
system, as they do not get their own. CORE nodes will have a directory
uniquely created for them as a place to keep their files and mounted
directories (`/tmp/pycore.<session id>/<node name>.conf`), which will
usually be wiped and removed upon shutdown.
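As a concrete sketch of that layout (the session id and node name below are made up; a real session id is assigned by core-daemon):

```shell
# hypothetical values for illustration only
session_id=53001
node_name=n1
echo "/tmp/pycore.${session_id}/${node_name}.conf"
# prints /tmp/pycore.53001/n1.conf
```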
## Docker Nodes

Docker nodes provide a convenience for running nodes using predefined images
and file systems that CORE nodes do not provide. Details for using Docker
nodes can be found [here](docker.md).

## LXC Nodes

LXC nodes provide a convenience for running nodes using predefined images
and file systems that CORE nodes do not provide. Details for using LXC
nodes can be found [here](lxc.md).
## Physical Nodes