Compare commits


3 commits

Author     SHA1        Message                                         Date
Huy Pham   c42d0161b0  touch up                                        2020-05-07 10:08:58 -07:00
Huy Pham   002caf09bf  touch up                                        2020-05-07 10:01:03 -07:00
Huy Pham   9bb7902060  implement multiple node service configuration   2020-05-06 22:40:34 -07:00
620 changed files with 72964 additions and 33703 deletions


@ -4,38 +4,39 @@ on: [push]
jobs: jobs:
build: build:
runs-on: ubuntu-22.04 runs-on: ubuntu-18.04
steps: steps:
- uses: actions/checkout@v1 - uses: actions/checkout@v1
- name: Set up Python 3.9 - name: Set up Python 3.6
uses: actions/setup-python@v1 uses: actions/setup-python@v1
with: with:
python-version: 3.9 python-version: 3.6
- name: install poetry - name: Install pipenv
run: | run: |
python -m pip install --upgrade pip python -m pip install --upgrade pip
pip install poetry pip install pipenv
cd daemon cd daemon
cp setup.py.in setup.py
cp core/constants.py.in core/constants.py cp core/constants.py.in core/constants.py
sed -i 's/required=True/required=False/g' core/emulator/coreemu.py sed -i 's/True/False/g' core/constants.py
poetry install pipenv sync --dev
- name: isort - name: isort
run: | run: |
cd daemon cd daemon
poetry run isort -c -df pipenv run isort -c -df
- name: black - name: black
run: | run: |
cd daemon cd daemon
poetry run black --check . pipenv run black --check --exclude ".+_pb2.*.py|doc|build|utm\.py|setup\.py" .
- name: flake8 - name: flake8
run: | run: |
cd daemon cd daemon
poetry run flake8 pipenv run flake8
- name: grpc - name: grpc
run: | run: |
cd daemon/proto cd daemon/proto
poetry run python -m grpc_tools.protoc -I . --python_out=.. --grpc_python_out=.. core/api/grpc/*.proto pipenv run python -m grpc_tools.protoc -I . --python_out=.. --grpc_python_out=.. core/api/grpc/*.proto
- name: test - name: test
run: | run: |
cd daemon cd daemon
poetry run pytest --mock tests pipenv run test --mock
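Read as plain commands, the pipenv column of the hunk above boils down to roughly the following sequence (a sketch; the checkout and setup-python steps are handled by the GitHub actions themselves):

```bash
# prepare the daemon sources and install locked dependencies with pipenv
python -m pip install --upgrade pip
pip install pipenv
cd daemon
cp setup.py.in setup.py
cp core/constants.py.in core/constants.py
sed -i 's/True/False/g' core/constants.py
pipenv sync --dev

# lint, regenerate gRPC stubs, and run the mocked test suite
pipenv run isort -c -df
pipenv run black --check --exclude ".+_pb2.*.py|doc|build|utm\.py|setup\.py" .
pipenv run flake8
(cd proto && pipenv run python -m grpc_tools.protoc -I . --python_out=.. --grpc_python_out=.. core/api/grpc/*.proto)
pipenv run test --mock
```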


@ -1,21 +0,0 @@
name: documentation
on:
push:
branches:
- master
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.x
- uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force

.gitignore (vendored, 11 changed lines)

@ -14,13 +14,9 @@ config.h.in
config.log config.log
config.status config.status
configure configure
configure~
debian debian
stamp-h1 stamp-h1
# python virtual environments
venv
# generated protobuf files # generated protobuf files
*_pb2.py *_pb2.py
*_pb2_grpc.py *_pb2_grpc.py
@ -43,7 +39,6 @@ coverage.xml
# python files # python files
*.egg-info *.egg-info
*.pyc
# ignore package files # ignore package files
*.rpm *.rpm
@ -60,8 +55,8 @@ coverage.xml
netns/setup.py netns/setup.py
daemon/setup.py daemon/setup.py
# ignore corefx build
corefx/target
# python # python
__pycache__ __pycache__
# ignore core player files
*.core


@ -1,432 +1,3 @@
## 2023-08-01 CORE 9.0.3
* Installation
* updated various dependencies
* Documentation
* improved GUI docs to include node interaction and note xhost usage
* \#780 - fixed gRPC examples
* \#787 - complete documentation revamp to leverage mkdocs material
* \#790 - fixed custom emane model example
* core-daemon
* update type hinting to avoid deprecated imports
* updated commands run within docker based nodes to have proper environment variables
* fixed issue improperly setting session options over gRPC
* \#668 - add fedora sbin path to frr service
* \#774 - fixed pcap configservice
* \#805 - fixed radvd configservice template error
* core-gui
* update type hinting to avoid deprecated imports
* fixed issue allowing duplicate named hook scripts
* fixed issue joining sessions with RJ45 nodes
* utility scripts
* fixed issue in core-cleanup for removing devices
## 2023-03-02 CORE 9.0.2
* Installation
* updated python dependencies, including invoke to resolve python 3.10+ issues
* improved example dockerfiles to use less space for built images
* Documentation
* updated emane install instructions
* added Docker related issues to install instructions
* core-daemon
* fixed issue using invalid device name in sysctl commands
* updated PTP nodes to properly disable mac learning for their linux bridge
* fixed issue for LXC nodes to properly use a configured image name and write it to XML
* \#742 - fixed issue with bad wlan node id being used
* \#744 - fixed issue not properly setting broadcast address
* core-gui
* fixed sample1.xml to remove SSH service
* fixed emane demo examples
* fixed issue displaying emane configs generally configured for a node
## 2022-11-28 CORE 9.0.1
* Installation
* updated protobuf and grpcio-tools versions in pyproject.toml to account for bad version mix
## 2022-11-18 CORE 9.0.0
* Breaking Changes
* removed session nodes file
* removed session state file
* emane now runs in one process per nem with unique control ports
* grpc client has been refactored and updated
* removed tcl/legacy gui, imn file support and the tlv api
* link configuration is now different, but consistent, for wired links
* Installation
* added packaging for single file distribution
* python3.9 is now the minimum required version
* updated Dockerfile examples
* updated various python dependencies
* virtual environment is now installed to /opt/core/venv
* Documentation
* updated emane invoke task examples
* revamped install documentation
* added wireless node notes
* core-gui
* updated config services to display rendered templates and allow editing
* fixed node icon issue when updating preferences
* \#89 - throughput widget now works for hubs/switches
* \#691 - fixed custom nodes to properly use config services
* gRPC API
* add linked call to support linking and unlinking interfaces without destroying them
* fixed issue during start session clearing out session options
* added call to get rendered config service files
* removed get_node_links from client
* nem id and nem port have been added to GetNode and AddLink calls
* core-daemon
* wired links always create two veth pairs joined by a bridge
* node interfaces are now configured within the container to apply to outgoing traffic
* session.add_node now uses NodeOptions, allowing for node specific options
* fixed issue with xml reading node canvas values
* removed Session.add_node_file
* fixed get requirements logic
* fixed docker/lxd node support terminal commands on remote servers
* improved docker node command execution time using nsenter
* new wireless node type added to support dynamic loss based on distance
* \#513 - adding and deleting distributed links during runtime is now supported
* \#703 - fixed issue not starting emane event listening service
## 2022-03-21 CORE 8.2.0
* core-gui
* improved failed starts to trigger runtime to allow node investigation
* core-daemon
* improved default service loading to use a full import path
* updated session instantiation to always set to a runtime state
* core-cli
* \#672 - fixed xml loading
* \#578 - restored json flag and added geo output to session overview
* Documentation
* updated emane example and documentation
* improved table markdown
## 2022-02-18 CORE 8.1.0
* Installation
* updated dependency versions to account for known vulnerabilities
* GUI
* fixed issue drawing asymmetric link configurations when joining a session
* daemon
* fixed issue getting templates and creating files for config services
* added bidirectional support for network to network links
* \#647 - fixed issue when creating RJ45 nodes
* \#646 - fixed issue when creating files for Docker nodes
* \#645 - improved wlan change updates to account for all updates with no delay
* services
* fixed file generation for OSPFv2 config service
## 2022-01-12 CORE 8.0.0
* Breaking Changes
* heavily refactored gRPC client, removing some calls, adding others, all using type hinted classes representing their protobuf counterparts
* emane adjustments to run each nem in its own process, includes adjustments to configuration, which may cause issues
* internal daemon cleanup and refactoring, relevant when a script directly driving a scenario is used
* Installation
* added options to allow installation without ospf mdr
* removed tasks that are no longer needed
* updates to properly install/remove example files
* pipx/poetry/invoke versions are now locked to help avoid update related issues
* install.sh is now setup.sh, a convenience script to get the tooling set up to run invoke
* Documentation
* formally added notes for Docker and LXD based node types
* added config services
* Updated README to have quick notes for installation
* \#563 - update to note how to enable core service
* Examples
* \#598 - update to fix sample1.imn to working order
* core-daemon
* emane global configuration is now configurable per nem
* fixed wlan loss to support float values
* improved default service loading to use full core path
* improved emane model loading to occur one time
* fixed handling rj45 link edits from tlv api
* fixed wlan config getting a default value for the promiscuous setting when not provided
* ebtables usage has now been replaced with nftables
* \#564 - logging is now using module named loggers
* \#573 - emane processes are not created 1 to 1 with nems
* \#608 - update lxml version
* \#609 - update pyyaml version
* \#623 - fixed issue with ovs mode and mac learning
* core-gui
* config services are now the default service type
* legacy services are marked as deprecated
* fix to properly load session options
* logging is now using module named loggers
* save as will now update the current session file name as expected
* fix to properly clear out removed customized services
* adding directories that do not exist to a service is now valid
* added flag to exit after creating gui directory from command line
* added new options to enable/disable ip4/ip6 assignment
* improved canvas draw order, when joining sessions
* improved node copy/paste to avoid issues when pasting text into service config dialogs
* each canvas will now correctly save and load its size from xml
* gRPC API
* session options are now returned for GetSession
* fixed issue not properly creating the session directory during start session definition state
* updates to separate editing a node and moving a node, new MoveNode call added, EditNode is now used for editing icons
* Services
* fixed default route config service
* config services now have options for shadowing directories, including per node customization
## 2021-09-17 CORE 7.5.2
* Installation
* \#596 - fixes issue related to installing poetry by pinning version to 1.1.7
* updates pipx installation to pinned version 0.16.4
* core-daemon
* \#600 - fixes known vulnerability for pillow dependency by updating version
## 2021-04-15 CORE 7.5.1
* core-pygui
* fixed issues creating and drawing custom nodes
## 2021-03-11 CORE 7.5.0
* core-daemon
* fixed issue setting mobility loop value properly
* fixed issue that some states would not properly remove session directories
* \#560 - fixed issues with sdt integration for mobility movement and layer creation
* core-pygui
* added multiple canvas support
* added support to hide nodes and restore them visually
* update to assign full netmasks to wireless connected nodes by default
* update to display services and action controls for nodes during runtime
* fixed issues with custom nodes
* fixed issue auto assigning macs, avoiding duplication
* fixed issue joining session with different netmasks
* fixed issues when deleting a session from the sessions dialog
* \#550 - fixed issue not sending all service customization data
* core-cli
* added delete session command
## 2021-01-11 CORE 7.4.0
* Installation
* fixed issue for automated install assuming ID_LIKE is always present in /etc/os-release
* gRPC API
* fixed issue stopping session and not properly going to data collect state
* fixed issue to have start session properly create a directory before configuration state
* core-pygui
* fixed issue handling deletion of wired link to a switch
* avoid saving edge metadata to xml when values are default
* fixed issue editing node mac addresses
* added support for configuring interface names
* fixed issue with potential node names to allow hyphens and remove underscores
* \#531 - fixed issue changing distributed nodes back to local
* core-daemon
* fixed issue to properly handle deleting links from a network to network node
* updated xml to support writing and reading link buffer configurations
* reverted change and removed mac learning from wlan, due to promiscuous like behavior
* fixed issue creating control interfaces when starting services
* fixed deadlock issue when clearing a session using sdt
* \#116 - fixed issue for wlans handling multiple mobility scripts at once
* \#539 - fixed issue in udp tlv api
## 2020-12-02 CORE 7.3.0
* core-daemon
* fixed issue where emane global configuration was not being sent to core-gui
* updated controlnet names on host to be prefixed with ctrl
* fixed RJ45 link shutdown from core-gui causing an error
* fixed emane external transport xml generation
* \#517 - update to account for radvd required directory
* \#514 - support added for session specific environment files
* \#529 - updated to configure netem limit based on delay or user specified, requires kernel 3.3+
* core-pygui
* fixed issue drawing wlan/emane link options when it should not have
* edge labels are now placed a set distance from nodes like original gui
* link color/width are now saved to xml files
* added support to configure buffer size for links
* \#525 - added support for multiple wired links between the same nodes
* \#526 - added option to hide/show links with 100% loss
* Documentation
* \#527 - typo in service documentation
* \#515 - added examples to docs for using EMANE features within a CORE context
## 2020-09-29 CORE 7.2.1
* core-daemon
* fixed issue where shutting down sessions may not have removed session directories
* fixed issue with multiple emane interfaces on the same node not getting the right configuration
* Installation
* updated automated install to be a bit more robust for alternative distros
* added force install type to try and leverage a redhat/debian like install
* locked ospf mdr version installed to older commit to avoid issues with multiple interfaces on same node
## 2020-09-15 CORE 7.2.0
* Installation
* locked down version of ospf-mdr installed in automated install
* locked down version of emane to v1.2.5 in automated emane install
* added option to install locally using the -l option
* core-daemon
* improve error when retrieving services that do not exist, or failed to load
* fixed issue with writing/reading emane node interface configurations to xml
* fixed issue with not setting the emane model when creating a node
* added common utility method for getting an emane node interface config id in core.utils
* fixed issue running emane on more than one interface for a node
* fixed issue validating paths when creating emane transport xml for a node
* fixed issue avoiding multiple calls to shutdown, if already in shutdown state
* core-pygui
* fixed issue configuring emane for a node interface
* gRPC API
* added wrapper client that can provide type hinting and a simpler interface at core.api.grpc.clientw
* fixed issue creating sessions that default to having a very large reference scale
* fixed issue with GetSession returning control net nodes
## 2020-08-21 CORE 7.1.0
* Installation
* added core-python script that gets installed to help globally reference the virtual environment
* gRPC API
* GetSession will now return all configuration information for a session and the file it was opened from, if applicable
* node update events will now include icon information
* fixed issue with getting session throughputs for sessions with a high id
* core-daemon
* \#503 - EMANE networks will now work with mobility again
* \#506 - fixed service dependency resolution issue
* fixed issue sending hooks to core-gui when joining session
* core-pygui
* fixed issues editing hooks
* fixed issue with cpu usage when joining a session
* fixed mac field not being disabled during runtime when configuring a node
* removed unlimited button from link config dialog
* fixed issue with copy/paste links and their options
* fixed issue with adding nodes/links and editing links during runtime
* updated open file dialog in config dialogs to open to ~/.coregui home directory
* fixed issue double clicking sessions dialog in invalid areas
* added display of asymmetric link options on links
* fixed emane config dialog display
* fixed issue saving backgrounds in xml files
* added view toggle for wired/wireless links
* node events will now update icons
## 2020-07-28 CORE 7.0.1
* Bugfixes
* \#500 - fixed issue running node commands with shell=True
* fixed issue for poetry based install not properly vetting requirements for dataclasses dependency
## 2020-07-23 CORE 7.0.0
* Breaking Changes
* core.emudata and core.data combined and cleaned up into core.data
* updates to consistently use mac instead of hwaddr/mac
* \#468 - code related to adding/editing/deleting links cleaned up
* \#469 - all usages of per changed to loss to be consistent
* \#470 - variables with numbered names now use numbers directly
* \#471 - node startup is no longer embedded within its constructor
* \#472 - code updated to refer to interfaces consistently as iface
* \#475 - code updates changing how ip addresses are stored on interfaces
* \#476 - executables to check for moved into own module core.executables
* \#486 - core will now install into its own python virtual environment managed by poetry
* core-daemon
* updates to properly save/load distributed servers to xml
* \#474 - added type hinting to all service files
* \#478 - fixed typo in config service directory
* \#479 - opening an xml file will now cycle through states like a normal session
* \#480 - ovs configuration will now save/load from xml and display in guis
* \#484 - changes to support adding emane links during runtime
* core-pygui
* fixed issue not displaying services for the default group in service dialogs
* fixed issue starting a session when the daemon is not present
* fixed issue attempting to open terminals for invalid nodes
* fixed issue syncing session location
* fixed issue joining a session with mobility, not in runtime
* added cpu usage monitor to status bar
* emane configurations can now be seen during runtime
* rj45 nodes can only have one link
* disabling throughputs will clear labels
* improvements to custom service copy
* link options will now be drawn on as a label
* updates to handle runtime link events
* \#477 - added optional details pane for a quick view of node/link details
* \#485 - pygui fixed observer widget for invalid nodes
* \#496 - improved alert handling
* core-gui
* \#493 - increased frame size to show all emane configuration options
* gRPC API
* added set session user rpc
* added cpu usage stream
* interface objects returned from get_node will now provide node_id, net_id, and net2_id data
* peer to peer nodes will not be included in get_session calls
* pathloss events will now throw an error when nem id not found
* \#481 - link rpc calls will broadcast out
* \#496 - added alert rpc call
* Services
* fixed issue reading files in security services
* \#494 - add staticd to daemons list for frr services
## 2020-06-11 CORE 6.5.0
* Breaking Changes
* CoreNode.newnetif - both parameters are required and now takes an InterfaceData object as its second parameter
* CoreNetworkBase.linkconfig - now takes a LinkOptions parameter instead of a subset of some of the options (ie bandwidth, delay, etc)
* \#453 - Session.add_node and Session.get_node now requires the node class you expect to create/retrieve
* \#458 - rj45 cleanup to only inherit from one class
* Enhancements
* fixed issues with handling bad commands for TLV execute messages
* removed unused boot.sh from CoreNode types
* added linkconfig to CoreNetworkBase and cleaned up function signature
* emane position hook now saves geo position to node
* emane pathloss support
* core.emulator.emudata leveraged dataclass and type hinting
* \#459 - updated transport type usage to an enum
* \#460 - updated network policy type usage to an enum
* Python GUI Enhancements
* fixed throughput events not working for joined sessions
* fixed exiting app with a toolbar picker showing
* fixed issue with creating interfaces and reusing subnets after deletion
* fixed issue with moving text shapes
* fixed scaling with custom node selected
* fixed toolbar state switching issues
* enable/disable toolbar when running stop/start
* marker config integrated into toolbar
* improved color picker layout
* shapes can now be moved while drawing shapes
* added observers to toolbar in run mode
* gRPC API
* node events will now have geo positional data
* node geo data is now returned in get_session and get_node calls
* \#451 - added wlan link api to allow direct linking/unlinking of wireless links between nodes
* \#462 - added streaming call for sending node position/geo changes
* \#463 - added streaming call for emane pathloss events
* Bugfixes
* \#454 - fixed issue creating docker nodes, but containers are now required to have networking tools
* \#466 - fixed issue in python gui when xml file is loading nodes with no ip4 addresses
## 2020-05-11 CORE 6.4.0
* Enhancements
* updates to core-route-monitor, allowing a specific session, configurable settings, and properly listening on all interfaces
* install.sh now has a "-r" option to help with reinstalling from the current branch and installing current python dependencies
* \#202 - enable OSPFv2 fast convergence
* \#178 - added comments to OVS service
* Python GUI Enhancements
* added initial documentation to help support usage
* supports drawing multiple links for wireless connections
* supports differentiating wireless networks with different colored links
* implemented unlink in node context menu to delete links to other nodes
* implemented node run tool dialog
* implemented find node dialog
* implemented address configuration dialog
* implemented mac configuration dialog
* updated link address creation to more closely mimic prior behavior
* updated configuration to use yaml class based configs
* implemented auto grid layout for nodes
* fixed drawn wlan ranges during configuration
* Bugfixes
* no longer writes link option data for WLAN/EMANE links in XML
* avoid configuring link options for WLAN/EMANE links when loading XML, due to them having been written to XML previously
* updates to allow building python docs again
* \#431 - peer to peer node uplink link data was not using an enum properly due to code changes
* \#432 - loading XML was not setting EMANE nodes model
* \#435 - loading XML was not maintaining existing session options
* \#448 - fixed issue sorting hooks being saved to XML
## 2020-04-13 CORE 6.3.0 ## 2020-04-13 CORE 6.3.0
* Features * Features
* \#424 - added FRR IS-IS service * \#424 - added FRR IS-IS service


@ -1,126 +0,0 @@
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
LABEL Description="CORE Docker Ubuntu Image"
ARG PREFIX=/usr/local
ARG BRANCH=master
ARG PROTOC_VERSION=3.19.6
ARG VENV_PATH=/opt/core/venv
ENV DEBIAN_FRONTEND=noninteractive
ENV PATH="$PATH:${VENV_PATH}/bin"
WORKDIR /opt
# install system dependencies
RUN apt-get update -y && \
apt-get install -y software-properties-common
RUN add-apt-repository "deb http://archive.ubuntu.com/ubuntu jammy universe"
RUN apt-get update -y && \
apt-get install -y --no-install-recommends \
automake \
bash \
ca-certificates \
ethtool \
gawk \
gcc \
g++ \
iproute2 \
iputils-ping \
libc-dev \
libev-dev \
libreadline-dev \
libtool \
nftables \
python3 \
python3-pip \
python3-tk \
pkg-config \
tk \
xauth \
xterm \
wireshark \
vim \
build-essential \
nano \
firefox \
net-tools \
rsync \
openssh-server \
openssh-client \
vsftpd \
atftpd \
atftp \
mini-httpd \
lynx \
tcpdump \
iperf \
iperf3 \
tshark \
openssh-sftp-server \
bind9 \
bind9-utils \
openvpn \
isc-dhcp-server \
isc-dhcp-client \
whois \
ipcalc \
socat \
hping3 \
libgtk-3-0 \
librest-0.7-0 \
libgtk-3-common \
dconf-gsettings-backend \
libsoup-gnome2.4-1 \
libsoup2.4-1 \
dconf-service \
x11-xserver-utils \
ftp \
git \
sudo \
wget \
tzdata \
libpcap-dev \
libpcre3-dev \
libprotobuf-dev \
libxml2-dev \
protobuf-compiler \
unzip \
uuid-dev \
iproute2 \
vlc \
iputils-ping && \
apt-get autoremove -y
# install core
RUN git clone https://github.com/coreemu/core && \
cd core && \
git checkout ${BRANCH} && \
./setup.sh && \
PATH=/root/.local/bin:$PATH inv install -v -p ${PREFIX} && \
cd /opt && \
rm -rf ospf-mdr
# install emane
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-x86_64.zip && \
mkdir protoc && \
unzip protoc-${PROTOC_VERSION}-linux-x86_64.zip -d protoc && \
git clone https://github.com/adjacentlink/emane.git && \
cd emane && \
./autogen.sh && \
./configure --prefix=/usr && \
make -j$(nproc) && \
make install && \
cd src/python && \
make clean && \
PATH=/opt/protoc/bin:$PATH make && \
${VENV_PATH}/bin/python -m pip install . && \
cd /opt && \
rm -rf protoc && \
rm -rf emane && \
rm -f protoc-${PROTOC_VERSION}-linux-x86_64.zip
WORKDIR /root
CMD /opt/core/venv/bin/core-daemon


@ -6,8 +6,12 @@ if WANT_DOCS
DOCS = docs man DOCS = docs man
endif endif
if WANT_GUI
GUI = gui
endif
if WANT_DAEMON if WANT_DAEMON
DAEMON = daemon DAEMON = scripts daemon
endif endif
if WANT_NETNS if WANT_NETNS
@ -15,13 +19,12 @@ if WANT_NETNS
endif endif
# keep docs last due to dependencies on binaries # keep docs last due to dependencies on binaries
SUBDIRS = $(DAEMON) $(NETNS) $(DOCS) SUBDIRS = $(GUI) $(DAEMON) $(NETNS) $(DOCS)
ACLOCAL_AMFLAGS = -I config ACLOCAL_AMFLAGS = -I config
# extra files to include with distribution tarball # extra files to include with distribution tarball
EXTRA_DIST = bootstrap.sh \ EXTRA_DIST = bootstrap.sh \
package \
LICENSE \ LICENSE \
README.md \ README.md \
ASSIGNMENT_OF_COPYRIGHT.pdf \ ASSIGNMENT_OF_COPYRIGHT.pdf \
@ -41,6 +44,58 @@ DISTCLEANFILES = aclocal.m4 \
MAINTAINERCLEANFILES = .version \ MAINTAINERCLEANFILES = .version \
.version.date .version.date
define fpm-rpm =
fpm -s dir -t rpm -n core \
-m "$(PACKAGE_MAINTAINERS)" \
--license "BSD" \
--description "Common Open Research Emulator" \
--url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \
-p core_VERSION_ARCH.rpm \
-v $(PACKAGE_VERSION) \
--rpm-init scripts/core-daemon \
--config-files "/etc/core" \
-d "ethtool" \
-d "tcl" \
-d "tk" \
-d "procps-ng" \
-d "bash >= 3.0" \
-d "ebtables" \
-d "iproute" \
-d "libev" \
-d "net-tools" \
-d "python3 >= 3.6" \
-d "python3-tkinter" \
-C $(DESTDIR)
endef
define fpm-deb =
fpm -s dir -t deb -n core \
-m "$(PACKAGE_MAINTAINERS)" \
--license "BSD" \
--description "Common Open Research Emulator" \
--url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \
-p core_VERSION_ARCH.deb \
-v $(PACKAGE_VERSION) \
--deb-systemd scripts/core-daemon.service \
--deb-no-default-config-files \
--config-files "/etc/core" \
-d "ethtool" \
-d "tcl" \
-d "tk" \
-d "libtk-img" \
-d "procps" \
-d "libc6 >= 2.14" \
-d "bash >= 3.0" \
-d "ebtables" \
-d "iproute2" \
-d "libev4" \
-d "python3 >= 3.6" \
-d "python3-tk" \
-C $(DESTDIR)
endef
define fpm-distributed-deb = define fpm-distributed-deb =
fpm -s dir -t deb -n core-distributed \ fpm -s dir -t deb -n core-distributed \
-m "$(PACKAGE_MAINTAINERS)" \ -m "$(PACKAGE_MAINTAINERS)" \
@ -48,19 +103,18 @@ fpm -s dir -t deb -n core-distributed \
--description "Common Open Research Emulator Distributed Package" \ --description "Common Open Research Emulator Distributed Package" \
--url https://github.com/coreemu/core \ --url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \ --vendor "$(PACKAGE_VENDOR)" \
-p core-distributed_VERSION_ARCH.deb \ -p core_distributed_VERSION_ARCH.deb \
-v $(PACKAGE_VERSION) \ -v $(PACKAGE_VERSION) \
-d "ethtool" \ -d "ethtool" \
-d "procps" \ -d "procps" \
-d "libc6 >= 2.14" \ -d "libc6 >= 2.14" \
-d "bash >= 3.0" \ -d "bash >= 3.0" \
-d "nftables" \ -d "ebtables" \
-d "iproute2" \ -d "iproute2" \
-d "libev4" \ -d "libev4" \
-d "openssh-server" \ -d "openssh-server" \
-d "xterm" \ -d "xterm" \
netns/vnoded=/usr/bin/ \ -C $(DESTDIR)
netns/vcmd=/usr/bin/
endef endef
define fpm-distributed-rpm = define fpm-distributed-rpm =
@ -70,86 +124,29 @@ fpm -s dir -t rpm -n core-distributed \
--description "Common Open Research Emulator Distributed Package" \ --description "Common Open Research Emulator Distributed Package" \
--url https://github.com/coreemu/core \ --url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \ --vendor "$(PACKAGE_VENDOR)" \
-p core-distributed_VERSION_ARCH.rpm \ -p core_distributed_VERSION_ARCH.rpm \
-v $(PACKAGE_VERSION) \ -v $(PACKAGE_VERSION) \
-d "ethtool" \ -d "ethtool" \
-d "procps-ng" \ -d "procps-ng" \
-d "bash >= 3.0" \ -d "bash >= 3.0" \
-d "nftables" \ -d "ebtables" \
-d "iproute" \ -d "iproute" \
-d "libev" \ -d "libev" \
-d "net-tools" \ -d "net-tools" \
-d "openssh-server" \ -d "openssh-server" \
-d "xterm" \ -d "xterm" \
netns/vnoded=/usr/bin/ \ -C $(DESTDIR)
netns/vcmd=/usr/bin/
endef
define fpm-rpm =
fpm -s dir -t rpm -n core \
-m "$(PACKAGE_MAINTAINERS)" \
--license "BSD" \
--description "core vnoded/vcmd and system dependencies" \
--url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \
-p core_VERSION_ARCH.rpm \
-v $(PACKAGE_VERSION) \
--rpm-init package/core-daemon \
--after-install package/after-install.sh \
--after-remove package/after-remove.sh \
-d "ethtool" \
-d "tk" \
-d "procps-ng" \
-d "bash >= 3.0" \
-d "ebtables" \
-d "iproute" \
-d "libev" \
-d "net-tools" \
-d "nftables" \
netns/vnoded=/usr/bin/ \
netns/vcmd=/usr/bin/ \
package/etc/core.conf=/etc/core/ \
package/etc/logging.conf=/etc/core/ \
package/examples=/opt/core/ \
daemon/dist/core-$(PACKAGE_VERSION)-py3-none-any.whl=/opt/core/
endef
define fpm-deb =
fpm -s dir -t deb -n core \
-m "$(PACKAGE_MAINTAINERS)" \
--license "BSD" \
--description "core vnoded/vcmd and system dependencies" \
--url https://github.com/coreemu/core \
--vendor "$(PACKAGE_VENDOR)" \
-p core_VERSION_ARCH.deb \
-v $(PACKAGE_VERSION) \
--deb-systemd package/core-daemon.service \
--deb-no-default-config-files \
--after-install package/after-install.sh \
--after-remove package/after-remove.sh \
-d "ethtool" \
-d "tk" \
-d "libtk-img" \
-d "procps" \
-d "libc6 >= 2.14" \
-d "bash >= 3.0" \
-d "ebtables" \
-d "iproute2" \
-d "libev4" \
-d "nftables" \
netns/vnoded=/usr/bin/ \
netns/vcmd=/usr/bin/ \
package/etc/core.conf=/etc/core/ \
package/etc/logging.conf=/etc/core/ \
package/examples=/opt/core/ \
daemon/dist/core-$(PACKAGE_VERSION)-py3-none-any.whl=/opt/core/
endef endef
.PHONY: fpm .PHONY: fpm
fpm: clean-local-fpm fpm: clean-local-fpm
cd daemon && poetry build -f wheel $(MAKE) install DESTDIR=$(DESTDIR)
$(call fpm-deb) $(call fpm-deb)
$(call fpm-rpm) $(call fpm-rpm)
.PHONY: fpm-distributed
fpm-distributed: clean-local-fpm
$(MAKE) -C netns install DESTDIR=$(DESTDIR)
$(call fpm-distributed-deb) $(call fpm-distributed-deb)
$(call fpm-distributed-rpm) $(call fpm-distributed-rpm)
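Both sides of the diff produce the packages through the `fpm` target; a rough invocation for the variant above that stages an install into DESTDIR would look like this (a sketch, assuming autotools, the build dependencies, and the fpm ruby gem are installed):

```bash
# rough sketch: build core_VERSION_ARCH.deb/.rpm via the Makefile's fpm target
./bootstrap.sh
./configure
make -j"$(nproc)"
make fpm DESTDIR=/tmp/core-stage   # stages an install tree, then runs the fpm-* recipes above
```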
@ -176,6 +173,7 @@ $(info creating file $1 from $1.in)
-e 's,[@]CORE_STATE_DIR[@],$(CORE_STATE_DIR),g' \ -e 's,[@]CORE_STATE_DIR[@],$(CORE_STATE_DIR),g' \
-e 's,[@]CORE_DATA_DIR[@],$(CORE_DATA_DIR),g' \ -e 's,[@]CORE_DATA_DIR[@],$(CORE_DATA_DIR),g' \
-e 's,[@]CORE_CONF_DIR[@],$(CORE_CONF_DIR),g' \ -e 's,[@]CORE_CONF_DIR[@],$(CORE_CONF_DIR),g' \
-e 's,[@]CORE_GUI_CONF_DIR[@],$(CORE_GUI_CONF_DIR),g' \
< $1.in > $1 < $1.in > $1
endef endef
@ -183,8 +181,12 @@ all: change-files
.PHONY: change-files .PHONY: change-files
change-files: change-files:
$(call change-files,gui/core-gui)
$(call change-files,scripts/core-daemon.service)
$(call change-files,scripts/core-daemon)
$(call change-files,daemon/core/constants.py) $(call change-files,daemon/core/constants.py)
$(call change-files,netns/setup.py) $(call change-files,netns/setup.py)
$(call change-files,daemon/setup.py)
CORE_DOC_SRC = core-python-$(PACKAGE_VERSION) CORE_DOC_SRC = core-python-$(PACKAGE_VERSION)
.PHONY: doc .PHONY: doc

README.md (109 changed lines)

@ -1,107 +1,24 @@
# Index
- CORE
- Docker Setup
- Precompiled container image
- Build container image from source
- Adding extra packages
- Useful commands
- License
# CORE # CORE
CORE: Common Open Research Emulator CORE: Common Open Research Emulator
Copyright (c)2005-2022 the Boeing Company. Copyright (c)2005-2020 the Boeing Company.
See the LICENSE file included in this distribution. See the LICENSE file included in this distribution.
# Docker Setup ## About
Here you have 2 choices The Common Open Research Emulator (CORE) is a tool for emulating
networks on one or more machines. You can connect these emulated
networks to live networks. CORE consists of a GUI for drawing
topologies of lightweight virtual machines, and Python modules for
scripting network emulation.
## Precompiled container image ## Documentation & Support
```bash We are leveraging GitHub hosted documentation and Discord for persistent
chat rooms. This allows for more dynamic conversations and the
capability to respond faster. Feel free to join us at the link below.
# Start container * [Documentation](https://coreemu.github.io/core/)
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged --restart unless-stopped git.olympuslab.net/afonso/core-extra:latest * [Discord Channel](https://discord.gg/AKd7kmP)
```
## Build container image from source
```bash
# Clone the repo
git clone https://gitea.olympuslab.net/afonso/core-extra.git
# cd into the directory
cd core-extra
# build the docker image
sudo docker build -t core-extra .
# start container
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged --restart unless-stopped core-extra
```
### Adding extra packages
To add extra packages you must modify the Dockerfile and then rebuild the docker image.
If you install packages after starting the container, they will, by docker's nature, be lost when the container is recreated rather than baked into the image.
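For example (a sketch, assuming the image is built from the Dockerfile in this repo), a hypothetical package such as nmap would be appended to the existing `apt-get install -y --no-install-recommends` list in the Dockerfile, then the image rebuilt and the container recreated:

```bash
# after editing the Dockerfile's apt-get install list (e.g. adding nmap), rebuild and recreate
sudo docker build -t core-extra .
sudo docker rm -f core
sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --privileged --restart unless-stopped core-extra
```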
# Useful commands
I have the following functions in my fish shell to help me use core more easily.
These only work in fish; modify them for bash or zsh (a rough bash equivalent is sketched after the block below).
```fish
# RUN CORE GUI
function core
xhost +local:root
sudo docker exec -it core core-gui
end
# RUN BASH INSIDE THE CONTAINER
function core-bash
sudo docker exec -it core /bin/bash
end
# LAUNCH NODE BASH ON THE HOST MACHINE
function launch-term --argument nodename
sudo docker exec -it core xterm -bg black -fg white -fa 'DejaVu Sans Mono' -fs 16 -e vcmd -c /tmp/pycore.1/$nodename -- /bin/bash
end
# TO RUN ANY OTHER COMMAND
sudo docker exec -it core COMMAND_GOES_HERE
```
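A rough bash equivalent of the fish functions above (an illustrative sketch, not taken from the repo) would be:

```bash
# RUN CORE GUI
core() {
    xhost +local:root
    sudo docker exec -it core core-gui
}

# RUN BASH INSIDE THE CONTAINER
core-bash() {
    sudo docker exec -it core /bin/bash
}

# LAUNCH NODE BASH ON THE HOST MACHINE
launch-term() {
    local nodename=$1
    sudo docker exec -it core xterm -bg black -fg white -fa 'DejaVu Sans Mono' -fs 16 \
        -e vcmd -c /tmp/pycore.1/"$nodename" -- /bin/bash
}
```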
## LICENSE
Copyright (c) 2005-2018, the Boeing Company.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.


@ -1,5 +1,9 @@
#!/bin/sh #!/bin/sh
# #
# (c)2010-2012 the Boeing Company
#
# author: Jeff Ahrenholz <jeffrey.m.ahrenholz@boeing.com>
#
# Bootstrap the autoconf system. # Bootstrap the autoconf system.
# #


@ -2,7 +2,7 @@
# Process this file with autoconf to produce a configure script. # Process this file with autoconf to produce a configure script.
# this defines the CORE version number, must be static for AC_INIT # this defines the CORE version number, must be static for AC_INIT
AC_INIT(core, 9.0.3) AC_INIT(core, 6.3.0)
# autoconf and automake initialization # autoconf and automake initialization
AC_CONFIG_SRCDIR([netns/version.h.in]) AC_CONFIG_SRCDIR([netns/version.h.in])
@ -30,14 +30,25 @@ AC_SUBST(CORE_CONF_DIR)
AC_SUBST(CORE_DATA_DIR) AC_SUBST(CORE_DATA_DIR)
AC_SUBST(CORE_STATE_DIR) AC_SUBST(CORE_STATE_DIR)
# documentation option # CORE GUI configuration files and preferences in CORE_GUI_CONF_DIR
# scenario files in ~/.core/configs/
AC_ARG_WITH([guiconfdir],
[AS_HELP_STRING([--with-guiconfdir=dir],
[specify GUI configuration directory])],
[CORE_GUI_CONF_DIR="$with_guiconfdir"],
[CORE_GUI_CONF_DIR="\$\${HOME}/.core"])
AC_SUBST(CORE_GUI_CONF_DIR)
AC_ARG_ENABLE([gui],
[AS_HELP_STRING([--enable-gui[=ARG]],
[build and install the GUI (default is yes)])],
[], [enable_gui=yes])
AC_SUBST(enable_gui)
AC_ARG_ENABLE([docs], AC_ARG_ENABLE([docs],
[AS_HELP_STRING([--enable-docs[=ARG]], [AS_HELP_STRING([--enable-docs[=ARG]],
[build python documentation (default is no)])], [build python documentation (default is no)])],
[], [enable_docs=no]) [], [enable_docs=no])
AC_SUBST(enable_docs) AC_SUBST(enable_docs)
# python option
AC_ARG_ENABLE([python], AC_ARG_ENABLE([python],
[AS_HELP_STRING([--enable-python[=ARG]], [AS_HELP_STRING([--enable-python[=ARG]],
[build and install the python bindings (default is yes)])], [build and install the python bindings (default is yes)])],
@ -83,7 +94,28 @@ if test "x$enable_daemon" = "xyes"; then
want_python=yes want_python=yes
want_linux_netns=yes want_linux_netns=yes
AM_PATH_PYTHON(3.9) # Checks for libraries.
AC_CHECK_LIB([netgraph], [NgMkSockNode])
# Checks for header files.
AC_CHECK_HEADERS([arpa/inet.h fcntl.h limits.h stdint.h stdlib.h string.h sys/ioctl.h sys/mount.h sys/socket.h sys/time.h termios.h unistd.h])
# Checks for typedefs, structures, and compiler characteristics.
AC_C_INLINE
AC_TYPE_INT32_T
AC_TYPE_PID_T
AC_TYPE_SIZE_T
AC_TYPE_SSIZE_T
AC_TYPE_UINT32_T
AC_TYPE_UINT8_T
# Checks for library functions.
AC_FUNC_FORK
AC_FUNC_MALLOC
AC_FUNC_REALLOC
AC_CHECK_FUNCS([atexit dup2 gettimeofday memset socket strerror uname])
AM_PATH_PYTHON(3.6)
AS_IF([$PYTHON -m grpc_tools.protoc -h &> /dev/null], [], [AC_MSG_ERROR([please install python grpcio-tools])]) AS_IF([$PYTHON -m grpc_tools.protoc -h &> /dev/null], [], [AC_MSG_ERROR([please install python grpcio-tools])])
AC_CHECK_PROG(sysctl_path, sysctl, $as_dir, no, $SEARCHPATH) AC_CHECK_PROG(sysctl_path, sysctl, $as_dir, no, $SEARCHPATH)
@ -91,9 +123,9 @@ if test "x$enable_daemon" = "xyes"; then
AC_MSG_ERROR([Could not locate sysctl (from procps package).]) AC_MSG_ERROR([Could not locate sysctl (from procps package).])
fi fi
AC_CHECK_PROG(nftables_path, nft, $as_dir, no, $SEARCHPATH) AC_CHECK_PROG(ebtables_path, ebtables, $as_dir, no, $SEARCHPATH)
if test "x$nftables_path" = "xno" ; then if test "x$ebtables_path" = "xno" ; then
AC_MSG_ERROR([Could not locate nftables (from nftables package).]) AC_MSG_ERROR([Could not locate ebtables (from ebtables package).])
fi fi
AC_CHECK_PROG(ip_path, ip, $as_dir, no, $SEARCHPATH) AC_CHECK_PROG(ip_path, ip, $as_dir, no, $SEARCHPATH)
@ -135,29 +167,22 @@ if test "x$enable_daemon" = "xyes"; then
if test "x$ovs_of_path" = "xno" ; then if test "x$ovs_of_path" = "xno" ; then
AC_MSG_WARN([Could not locate ovs-ofctl cannot use OVS mode]) AC_MSG_WARN([Could not locate ovs-ofctl cannot use OVS mode])
fi fi
CFLAGS_save=$CFLAGS
CPPFLAGS_save=$CPPFLAGS
if test "x$PYTHON_INCLUDE_DIR" = "x"; then
PYTHON_INCLUDE_DIR=`$PYTHON -c "import distutils.sysconfig; print(distutils.sysconfig.get_python_inc())"`
fi
CFLAGS="-I$PYTHON_INCLUDE_DIR"
CPPFLAGS="-I$PYTHON_INCLUDE_DIR"
AC_CHECK_HEADERS([Python.h], [],
AC_MSG_ERROR([Python bindings require Python development headers (try installing your 'python-devel' or 'python-dev' package)]))
CFLAGS=$CFLAGS_save
CPPFLAGS=$CPPFLAGS_save
fi fi
if [ test "x$enable_daemon" = "xyes" || test "x$enable_vnodedonly" = "xyes" ] ; then if [ test "x$enable_daemon" = "xyes" || test "x$enable_vnodedonly" = "xyes" ] ; then
want_linux_netns=yes want_linux_netns=yes
# Checks for header files.
AC_CHECK_HEADERS([arpa/inet.h fcntl.h limits.h stdint.h stdlib.h string.h sys/ioctl.h sys/mount.h sys/socket.h sys/time.h termios.h unistd.h])
# Checks for typedefs, structures, and compiler characteristics.
AC_C_INLINE
AC_TYPE_INT32_T
AC_TYPE_PID_T
AC_TYPE_SIZE_T
AC_TYPE_SSIZE_T
AC_TYPE_UINT32_T
AC_TYPE_UINT8_T
# Checks for library functions.
AC_FUNC_FORK
AC_FUNC_MALLOC
AC_FUNC_REALLOC
AC_CHECK_FUNCS([atexit dup2 gettimeofday memset socket strerror uname])
PKG_CHECK_MODULES(libev, libev, PKG_CHECK_MODULES(libev, libev,
AC_MSG_RESULT([found libev using pkgconfig OK]) AC_MSG_RESULT([found libev using pkgconfig OK])
AC_SUBST(libev_CFLAGS) AC_SUBST(libev_CFLAGS)
@ -195,11 +220,22 @@ if [test "x$want_python" = "xyes" && test "x$enable_docs" = "xyes"] ; then
AS_IF([$PYTHON -c "import sphinx_rtd_theme" &> /dev/null], [], [AC_MSG_ERROR([doc dependency missing, please install python3 -m pip install sphinx-rtd-theme])]) AS_IF([$PYTHON -c "import sphinx_rtd_theme" &> /dev/null], [], [AC_MSG_ERROR([doc dependency missing, please install python3 -m pip install sphinx-rtd-theme])])
fi fi
AC_ARG_WITH([startup],
[AS_HELP_STRING([--with-startup=option],
[option=systemd,suse,none to install systemd/SUSE init scripts])],
[with_startup=$with_startup],
[with_startup=initd])
AC_SUBST(with_startup)
AC_MSG_RESULT([using startup option $with_startup])
# Variable substitutions # Variable substitutions
AM_CONDITIONAL(WANT_GUI, test x$enable_gui = xyes)
AM_CONDITIONAL(WANT_DAEMON, test x$enable_daemon = xyes) AM_CONDITIONAL(WANT_DAEMON, test x$enable_daemon = xyes)
AM_CONDITIONAL(WANT_DOCS, test x$want_docs = xyes) AM_CONDITIONAL(WANT_DOCS, test x$want_docs = xyes)
AM_CONDITIONAL(WANT_PYTHON, test x$want_python = xyes) AM_CONDITIONAL(WANT_PYTHON, test x$want_python = xyes)
AM_CONDITIONAL(WANT_NETNS, test x$want_linux_netns = xyes) AM_CONDITIONAL(WANT_NETNS, test x$want_linux_netns = xyes)
AM_CONDITIONAL(WANT_INITD, test x$with_startup = xinitd)
AM_CONDITIONAL(WANT_SYSTEMD, test x$with_startup = xsystemd)
AM_CONDITIONAL(WANT_VNODEDONLY, test x$enable_vnodedonly = xyes) AM_CONDITIONAL(WANT_VNODEDONLY, test x$enable_vnodedonly = xyes)
if test $cross_compiling = no; then if test $cross_compiling = no; then
@ -210,6 +246,10 @@ fi
# Output files # Output files
AC_CONFIG_FILES([Makefile AC_CONFIG_FILES([Makefile
gui/version.tcl
gui/Makefile
gui/icons/Makefile
scripts/Makefile
man/Makefile man/Makefile
docs/Makefile docs/Makefile
daemon/Makefile daemon/Makefile
@ -231,12 +271,20 @@ Build:
Prefix: ${prefix} Prefix: ${prefix}
Exec Prefix: ${exec_prefix} Exec Prefix: ${exec_prefix}
GUI:
GUI path: ${CORE_LIB_DIR}
GUI config: ${CORE_GUI_CONF_DIR}
Daemon: Daemon:
Daemon path: ${bindir} Daemon path: ${bindir}
Daemon config: ${CORE_CONF_DIR} Daemon config: ${CORE_CONF_DIR}
Python: ${PYTHON} Python: ${PYTHON}
Logs: ${CORE_STATE_DIR}/log
Startup: ${with_startup}
Features to build: Features to build:
Build GUI: ${enable_gui}
Build Daemon: ${enable_daemon} Build Daemon: ${enable_daemon}
Documentation: ${want_docs} Documentation: ${want_docs}
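Taken together, the configure options added above would typically be exercised like this (a sketch; the option names and values come from the hunk above):

```bash
# rough sketch of a build using the gui/startup options added above
./bootstrap.sh
./configure --enable-gui --with-guiconfdir="$HOME/.core" --with-startup=systemd
make -j"$(nproc)"
sudo make install
```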

daemon/.gitignore (vendored, new file, 2 lines)

@ -0,0 +1,2 @@
*.pyc
build


@ -5,19 +5,19 @@ repos:
name: isort name: isort
stages: [commit] stages: [commit]
language: system language: system
entry: bash -c 'cd daemon && poetry run isort --atomic -y' entry: bash -c 'cd daemon && pipenv run isort --atomic -y'
types: [python] types: [python]
- id: black - id: black
name: black name: black
stages: [commit] stages: [commit]
language: system language: system
entry: bash -c 'cd daemon && poetry run black .' entry: bash -c 'cd daemon && pipenv run black --exclude ".+_pb2.*.py|doc|build|utm\.py" .'
types: [python] types: [python]
- id: flake8 - id: flake8
name: flake8 name: flake8
stages: [commit] stages: [commit]
language: system language: system
entry: bash -c 'cd daemon && poetry run flake8' entry: bash -c 'cd daemon && pipenv run flake8'
types: [python] types: [python]

daemon/MANIFEST.in (new file, 2 lines)

@ -0,0 +1,2 @@
graft core/gui/data
graft core/configservices/*/templates


@ -1,14 +1,49 @@
# CORE # CORE
# (c)2010-2012 the Boeing Company.
# See the LICENSE file included in this distribution.
#
# author: Jeff Ahrenholz <jeffrey.m.ahrenholz@boeing.com>
# #
# Makefile for building netns components. # Makefile for building netns components.
# #
SETUPPY = setup.py
SETUPPYFLAGS = -v
if WANT_DOCS if WANT_DOCS
DOCS = doc DOCS = doc
endif endif
SUBDIRS = proto $(DOCS) SUBDIRS = proto $(DOCS)
SCRIPT_FILES := $(notdir $(wildcard scripts/*))
MAN_FILES := $(notdir $(wildcard ../man/*.1))
# Python package build
noinst_SCRIPTS = build
build:
$(PYTHON) $(SETUPPY) $(SETUPPYFLAGS) build
# Python package install
install-exec-hook:
$(PYTHON) $(SETUPPY) $(SETUPPYFLAGS) install \
--root=/$(DESTDIR) \
--prefix=$(prefix) \
--single-version-externally-managed
# Python package uninstall
uninstall-hook:
rm -rf $(DESTDIR)/etc/core
rm -rf $(DESTDIR)/$(datadir)/core
rm -f $(addprefix $(DESTDIR)/$(datarootdir)/man/man1/, $(MAN_FILES))
rm -f $(addprefix $(DESTDIR)/$(bindir)/,$(SCRIPT_FILES))
rm -rf $(DESTDIR)/$(pythondir)/core-$(PACKAGE_VERSION)-py$(PYTHON_VERSION).egg-info
rm -rf $(DESTDIR)/$(pythondir)/core
# Python package cleanup
clean-local:
-rm -rf build
# because we include entire directories with EXTRA_DIST, we need to clean up # because we include entire directories with EXTRA_DIST, we need to clean up
# the source control files # the source control files
dist-hook: dist-hook:
@ -17,12 +52,17 @@ dist-hook:
distclean-local: distclean-local:
-rm -rf core.egg-info -rm -rf core.egg-info
DISTCLEANFILES = Makefile.in DISTCLEANFILES = Makefile.in
# files to include with distribution tarball # files to include with distribution tarball
EXTRA_DIST = core \ EXTRA_DIST = $(SETUPPY) \
core \
data \
doc/conf.py.in \ doc/conf.py.in \
examples \
scripts \
tests \ tests \
test.py \
setup.cfg \ setup.cfg \
poetry.lock \ requirements.txt
pyproject.toml

daemon/Pipfile (new file, 23 lines)

@ -0,0 +1,23 @@
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[scripts]
core = "python scripts/core-daemon -f data/core.conf -l data/logging.conf"
core-pygui = "python scripts/core-pygui"
test = "pytest -v tests"
test-mock = "pytest -v --mock tests"
test-emane = "pytest -v tests/emane"
[dev-packages]
grpcio-tools = "*"
isort = "*"
pre-commit = "*"
flake8 = "*"
black = "==19.3b0"
pytest = "*"
mock = "*"
[packages]
core = {editable = true,path = "."}
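For reference, the [scripts] entries above would normally be invoked through pipenv from the daemon directory, roughly like this (a sketch assuming pipenv is installed):

```bash
cd daemon
pipenv sync --dev     # install the locked runtime and dev dependencies
pipenv run core       # core-daemon with the bundled data/core.conf and data/logging.conf
pipenv run test-mock  # pytest -v --mock tests
```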

daemon/Pipfile.lock (generated, new file, 732 lines)

@ -0,0 +1,732 @@
{
"_meta": {
"hash": {
"sha256": "199897f713f6f338316b33fcbbe0001e9e55fcd5e5e24b2245a89454ce13321f"
},
"pipfile-spec": 6,
"requires": {},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple",
"verify_ssl": true
}
]
},
"default": {
"bcrypt": {
"hashes": [
"sha256:0258f143f3de96b7c14f762c770f5fc56ccd72f8a1857a451c1cd9a655d9ac89",
"sha256:0b0069c752ec14172c5f78208f1863d7ad6755a6fae6fe76ec2c80d13be41e42",
"sha256:19a4b72a6ae5bb467fea018b825f0a7d917789bcfe893e53f15c92805d187294",
"sha256:5432dd7b34107ae8ed6c10a71b4397f1c853bd39a4d6ffa7e35f40584cffd161",
"sha256:6305557019906466fc42dbc53b46da004e72fd7a551c044a827e572c82191752",
"sha256:69361315039878c0680be456640f8705d76cb4a3a3fe1e057e0f261b74be4b31",
"sha256:6fe49a60b25b584e2f4ef175b29d3a83ba63b3a4df1b4c0605b826668d1b6be5",
"sha256:74a015102e877d0ccd02cdeaa18b32aa7273746914a6c5d0456dd442cb65b99c",
"sha256:763669a367869786bb4c8fcf731f4175775a5b43f070f50f46f0b59da45375d0",
"sha256:8b10acde4e1919d6015e1df86d4c217d3b5b01bb7744c36113ea43d529e1c3de",
"sha256:9fe92406c857409b70a38729dbdf6578caf9228de0aef5bc44f859ffe971a39e",
"sha256:a190f2a5dbbdbff4b74e3103cef44344bc30e61255beb27310e2aec407766052",
"sha256:a595c12c618119255c90deb4b046e1ca3bcfad64667c43d1166f2b04bc72db09",
"sha256:c9457fa5c121e94a58d6505cadca8bed1c64444b83b3204928a866ca2e599105",
"sha256:cb93f6b2ab0f6853550b74e051d297c27a638719753eb9ff66d1e4072be67133",
"sha256:ce4e4f0deb51d38b1611a27f330426154f2980e66582dc5f438aad38b5f24fc1",
"sha256:d7bdc26475679dd073ba0ed2766445bb5b20ca4793ca0db32b399dccc6bc84b7",
"sha256:ff032765bb8716d9387fd5376d987a937254b0619eff0972779515b5c98820bc"
],
"version": "==3.1.7"
},
"cffi": {
"hashes": [
"sha256:001bf3242a1bb04d985d63e138230802c6c8d4db3668fb545fb5005ddf5bb5ff",
"sha256:00789914be39dffba161cfc5be31b55775de5ba2235fe49aa28c148236c4e06b",
"sha256:028a579fc9aed3af38f4892bdcc7390508adabc30c6af4a6e4f611b0c680e6ac",
"sha256:14491a910663bf9f13ddf2bc8f60562d6bc5315c1f09c704937ef17293fb85b0",
"sha256:1cae98a7054b5c9391eb3249b86e0e99ab1e02bb0cc0575da191aedadbdf4384",
"sha256:2089ed025da3919d2e75a4d963d008330c96751127dd6f73c8dc0c65041b4c26",
"sha256:2d384f4a127a15ba701207f7639d94106693b6cd64173d6c8988e2c25f3ac2b6",
"sha256:337d448e5a725bba2d8293c48d9353fc68d0e9e4088d62a9571def317797522b",
"sha256:399aed636c7d3749bbed55bc907c3288cb43c65c4389964ad5ff849b6370603e",
"sha256:3b911c2dbd4f423b4c4fcca138cadde747abdb20d196c4a48708b8a2d32b16dd",
"sha256:3d311bcc4a41408cf5854f06ef2c5cab88f9fded37a3b95936c9879c1640d4c2",
"sha256:62ae9af2d069ea2698bf536dcfe1e4eed9090211dbaafeeedf5cb6c41b352f66",
"sha256:66e41db66b47d0d8672d8ed2708ba91b2f2524ece3dee48b5dfb36be8c2f21dc",
"sha256:675686925a9fb403edba0114db74e741d8181683dcf216be697d208857e04ca8",
"sha256:7e63cbcf2429a8dbfe48dcc2322d5f2220b77b2e17b7ba023d6166d84655da55",
"sha256:8a6c688fefb4e1cd56feb6c511984a6c4f7ec7d2a1ff31a10254f3c817054ae4",
"sha256:8c0ffc886aea5df6a1762d0019e9cb05f825d0eec1f520c51be9d198701daee5",
"sha256:95cd16d3dee553f882540c1ffe331d085c9e629499ceadfbda4d4fde635f4b7d",
"sha256:99f748a7e71ff382613b4e1acc0ac83bf7ad167fb3802e35e90d9763daba4d78",
"sha256:b8c78301cefcf5fd914aad35d3c04c2b21ce8629b5e4f4e45ae6812e461910fa",
"sha256:c420917b188a5582a56d8b93bdd8e0f6eca08c84ff623a4c16e809152cd35793",
"sha256:c43866529f2f06fe0edc6246eb4faa34f03fe88b64a0a9a942561c8e22f4b71f",
"sha256:cab50b8c2250b46fe738c77dbd25ce017d5e6fb35d3407606e7a4180656a5a6a",
"sha256:cef128cb4d5e0b3493f058f10ce32365972c554572ff821e175dbc6f8ff6924f",
"sha256:cf16e3cf6c0a5fdd9bc10c21687e19d29ad1fe863372b5543deaec1039581a30",
"sha256:e56c744aa6ff427a607763346e4170629caf7e48ead6921745986db3692f987f",
"sha256:e577934fc5f8779c554639376beeaa5657d54349096ef24abe8c74c5d9c117c3",
"sha256:f2b0fa0c01d8a0c7483afd9f31d7ecf2d71760ca24499c8697aeb5ca37dc090c"
],
"version": "==1.14.0"
},
"core": {
"editable": true,
"path": "."
},
"cryptography": {
"hashes": [
"sha256:02079a6addc7b5140ba0825f542c0869ff4df9a69c360e339ecead5baefa843c",
"sha256:1df22371fbf2004c6f64e927668734070a8953362cd8370ddd336774d6743595",
"sha256:369d2346db5934345787451504853ad9d342d7f721ae82d098083e1f49a582ad",
"sha256:3cda1f0ed8747339bbdf71b9f38ca74c7b592f24f65cdb3ab3765e4b02871651",
"sha256:44ff04138935882fef7c686878e1c8fd80a723161ad6a98da31e14b7553170c2",
"sha256:4b1030728872c59687badcca1e225a9103440e467c17d6d1730ab3d2d64bfeff",
"sha256:58363dbd966afb4f89b3b11dfb8ff200058fbc3b947507675c19ceb46104b48d",
"sha256:6ec280fb24d27e3d97aa731e16207d58bd8ae94ef6eab97249a2afe4ba643d42",
"sha256:7270a6c29199adc1297776937a05b59720e8a782531f1f122f2eb8467f9aab4d",
"sha256:73fd30c57fa2d0a1d7a49c561c40c2f79c7d6c374cc7750e9ac7c99176f6428e",
"sha256:7f09806ed4fbea8f51585231ba742b58cbcfbfe823ea197d8c89a5e433c7e912",
"sha256:90df0cc93e1f8d2fba8365fb59a858f51a11a394d64dbf3ef844f783844cc793",
"sha256:971221ed40f058f5662a604bd1ae6e4521d84e6cad0b7b170564cc34169c8f13",
"sha256:a518c153a2b5ed6b8cc03f7ae79d5ffad7315ad4569b2d5333a13c38d64bd8d7",
"sha256:b0de590a8b0979649ebeef8bb9f54394d3a41f66c5584fff4220901739b6b2f0",
"sha256:b43f53f29816ba1db8525f006fa6f49292e9b029554b3eb56a189a70f2a40879",
"sha256:d31402aad60ed889c7e57934a03477b572a03af7794fa8fb1780f21ea8f6551f",
"sha256:de96157ec73458a7f14e3d26f17f8128c959084931e8997b9e655a39c8fde9f9",
"sha256:df6b4dca2e11865e6cfbfb708e800efb18370f5a46fd601d3755bc7f85b3a8a2",
"sha256:ecadccc7ba52193963c0475ac9f6fa28ac01e01349a2ca48509667ef41ffd2cf",
"sha256:fb81c17e0ebe3358486cd8cc3ad78adbae58af12fc2bf2bc0bb84e8090fa5ce8"
],
"version": "==2.8"
},
"dataclasses": {
"hashes": [
"sha256:3459118f7ede7c8bea0fe795bff7c6c2ce287d01dd226202f7c9ebc0610a7836",
"sha256:494a6dcae3b8bcf80848eea2ef64c0cc5cd307ffc263e17cdf42f3e5420808e6"
],
"index": "pypi",
"markers": "python_version == '3.6'",
"version": "==0.7"
},
"fabric": {
"hashes": [
"sha256:160331934ea60036604928e792fa8e9f813266b098ef5562aa82b88527740389",
"sha256:24842d7d51556adcabd885ac3cf5e1df73fc622a1708bf3667bf5927576cdfa6"
],
"version": "==2.5.0"
},
"grpcio": {
"hashes": [
"sha256:02aef8ef1a5ac5f0836b543e462eb421df6048a7974211a906148053b8055ea6",
"sha256:07f82aefb4a56c7e1e52b78afb77d446847d27120a838a1a0489260182096045",
"sha256:1cff47297ee614e7ef66243dc34a776883ab6da9ca129ea114a802c5e58af5c1",
"sha256:1ec8fc865d8da6d0713e2092a27eee344cd54628b2c2065a0e77fff94df4ae00",
"sha256:1ef949b15a1f5f30651532a9b54edf3bd7c0b699a10931505fa2c80b2d395942",
"sha256:209927e65395feb449783943d62a3036982f871d7f4045fadb90b2d82b153ea8",
"sha256:25c77692ea8c0929d4ad400ea9c3dcbcc4936cee84e437e0ef80da58fa73d88a",
"sha256:28f27c64dd699b8b10f70da5f9320c1cffcaefca7dd76275b44571bd097f276c",
"sha256:355bd7d7ce5ff2917d217f0e8ddac568cb7403e1ce1639b35a924db7d13a39b6",
"sha256:4a0a33ada3f6f94f855f92460896ef08c798dcc5f17d9364d1735c5adc9d7e4a",
"sha256:4d3b6e66f32528bf43ca2297caca768280a8e068820b1c3dca0fcf9f03c7d6f1",
"sha256:5121fa96c79fc0ec81825091d0be5c16865f834f41b31da40b08ee60552f9961",
"sha256:57949756a3ce1f096fa2b00f812755f5ab2effeccedb19feeb7d0deafa3d1de7",
"sha256:586d931736912865c9790c60ca2db29e8dc4eace160d5a79fec3e58df79a9386",
"sha256:5ae532b93cf9ce5a2a549b74a2c35e3b690b171ece9358519b3039c7b84c887e",
"sha256:5dab393ab96b2ce4012823b2f2ed4ee907150424d2f02b97bd6f8dd8f17cc866",
"sha256:5ebc13451246de82f130e8ee7e723e8d7ae1827f14b7b0218867667b1b12c88d",
"sha256:68a149a0482d0bc697aac702ec6efb9d380e0afebf9484db5b7e634146528371",
"sha256:6db7ded10b82592c472eeeba34b9f12d7b0ab1e2dcad12f081b08ebdea78d7d6",
"sha256:6e545908bcc2ae28e5b190ce3170f92d0438cf26a82b269611390114de0106eb",
"sha256:6f328a3faaf81a2546a3022b3dfc137cc6d50d81082dbc0c94d1678943f05df3",
"sha256:706e2dea3de33b0d8884c4d35ecd5911b4ff04d0697c4138096666ce983671a6",
"sha256:80c3d1ce8820dd819d1c9d6b63b6f445148480a831173b572a9174a55e7abd47",
"sha256:8111b61eee12d7af5c58f82f2c97c2664677a05df9225ef5cbc2f25398c8c454",
"sha256:9713578f187fb1c4d00ac554fe1edcc6b3ddd62f5d4eb578b81261115802df8e",
"sha256:9c0669ba9aebad540fb05a33beb7e659ea6e5ca35833fc5229c20f057db760e8",
"sha256:9e9cfe55dc7ac2aa47e0fd3285ff829685f96803197042c9d2f0fb44e4b39b2c",
"sha256:a22daaf30037b8e59d6968c76fe0f7ff062c976c7a026e92fbefc4c4bf3fc5a4",
"sha256:a25b84e10018875a0f294a7649d07c43e8bc3e6a821714e39e5cd607a36386d7",
"sha256:a71138366d57901597bfcc52af7f076ab61c046f409c7b429011cd68de8f9fe6",
"sha256:b4efde5524579a9ce0459ca35a57a48ca878a4973514b8bb88cb80d7c9d34c85",
"sha256:b78af4d42985ab3143d9882d0006f48d12f1bc4ba88e78f23762777c3ee64571",
"sha256:bb2987eb3af9bcf46019be39b82c120c3d35639a95bc4ee2d08f36ecdf469345",
"sha256:c03ce53690fe492845e14f4ab7e67d5a429a06db99b226b5c7caa23081c1e2bb",
"sha256:c59b9280284b791377b3524c8e39ca7b74ae2881ba1a6c51b36f4f1bb94cee49",
"sha256:d18b4c8cacbb141979bb44355ee5813dd4d307e9d79b3a36d66eca7e0a203df8",
"sha256:d1e5563e3b7f844dbc48d709c9e4a75647e11d0387cc1fa0c861d3e9d34bc844",
"sha256:d22c897b65b1408509099f1c3334bd3704f5e4eb7c0486c57d0e212f71cb8f54",
"sha256:dbec0a3a154dbf2eb85b38abaddf24964fa1c059ee0a4ad55d6f39211b1a4bca",
"sha256:ed123037896a8db6709b8ad5acc0ed435453726ea0b63361d12de369624c2ab5",
"sha256:f3614dabd2cc8741850597b418bcf644d4f60e73615906c3acc407b78ff720b3",
"sha256:f9d632ce9fd485119c968ec6a7a343de698c5e014d17602ae2f110f1b05925ed",
"sha256:fb62996c61eeff56b59ab8abfcaa0859ec2223392c03d6085048b576b567459b"
],
"version": "==1.27.2"
},
"invoke": {
"hashes": [
"sha256:87b3ef9d72a1667e104f89b159eaf8a514dbf2f3576885b2bbdefe74c3fb2132",
"sha256:93e12876d88130c8e0d7fd6618dd5387d6b36da55ad541481dfa5e001656f134",
"sha256:de3f23bfe669e3db1085789fd859eb8ca8e0c5d9c20811e2407fa042e8a5e15d"
],
"version": "==1.4.1"
},
"lxml": {
"hashes": [
"sha256:06d4e0bbb1d62e38ae6118406d7cdb4693a3fa34ee3762238bcb96c9e36a93cd",
"sha256:0701f7965903a1c3f6f09328c1278ac0eee8f56f244e66af79cb224b7ef3801c",
"sha256:1f2c4ec372bf1c4a2c7e4bb20845e8bcf8050365189d86806bad1e3ae473d081",
"sha256:4235bc124fdcf611d02047d7034164897ade13046bda967768836629bc62784f",
"sha256:5828c7f3e615f3975d48f40d4fe66e8a7b25f16b5e5705ffe1d22e43fb1f6261",
"sha256:585c0869f75577ac7a8ff38d08f7aac9033da2c41c11352ebf86a04652758b7a",
"sha256:5d467ce9c5d35b3bcc7172c06320dddb275fea6ac2037f72f0a4d7472035cea9",
"sha256:63dbc21efd7e822c11d5ddbedbbb08cd11a41e0032e382a0fd59b0b08e405a3a",
"sha256:7bc1b221e7867f2e7ff1933165c0cec7153dce93d0cdba6554b42a8beb687bdb",
"sha256:8620ce80f50d023d414183bf90cc2576c2837b88e00bea3f33ad2630133bbb60",
"sha256:8a0ebda56ebca1a83eb2d1ac266649b80af8dd4b4a3502b2c1e09ac2f88fe128",
"sha256:90ed0e36455a81b25b7034038e40880189169c308a3df360861ad74da7b68c1a",
"sha256:95e67224815ef86924fbc2b71a9dbd1f7262384bca4bc4793645794ac4200717",
"sha256:afdb34b715daf814d1abea0317b6d672476b498472f1e5aacbadc34ebbc26e89",
"sha256:b4b2c63cc7963aedd08a5f5a454c9f67251b1ac9e22fd9d72836206c42dc2a72",
"sha256:d068f55bda3c2c3fcaec24bd083d9e2eede32c583faf084d6e4b9daaea77dde8",
"sha256:d5b3c4b7edd2e770375a01139be11307f04341ec709cf724e0f26ebb1eef12c3",
"sha256:deadf4df349d1dcd7b2853a2c8796593cc346600726eff680ed8ed11812382a7",
"sha256:df533af6f88080419c5a604d0d63b2c33b1c0c4409aba7d0cb6de305147ea8c8",
"sha256:e4aa948eb15018a657702fee0b9db47e908491c64d36b4a90f59a64741516e77",
"sha256:e5d842c73e4ef6ed8c1bd77806bf84a7cb535f9c0cf9b2c74d02ebda310070e1",
"sha256:ebec08091a22c2be870890913bdadd86fcd8e9f0f22bcb398abd3af914690c15",
"sha256:edc15fcfd77395e24543be48871c251f38132bb834d9fdfdad756adb6ea37679",
"sha256:f2b74784ed7e0bc2d02bd53e48ad6ba523c9b36c194260b7a5045071abbb1012",
"sha256:fa071559f14bd1e92077b1b5f6c22cf09756c6de7139370249eb372854ce51e6",
"sha256:fd52e796fee7171c4361d441796b64df1acfceb51f29e545e812f16d023c4bbc",
"sha256:fe976a0f1ef09b3638778024ab9fb8cde3118f203364212c198f71341c0715ca"
],
"version": "==4.5.0"
},
"mako": {
"hashes": [
"sha256:3139c5d64aa5d175dbafb95027057128b5fbd05a40c53999f3905ceb53366d9d",
"sha256:8e8b53c71c7e59f3de716b6832c4e401d903af574f6962edbbbf6ecc2a5fe6c9"
],
"version": "==1.1.2"
},
"markupsafe": {
"hashes": [
"sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473",
"sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161",
"sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235",
"sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5",
"sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42",
"sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff",
"sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b",
"sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1",
"sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e",
"sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183",
"sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66",
"sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b",
"sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1",
"sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15",
"sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1",
"sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e",
"sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b",
"sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905",
"sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735",
"sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d",
"sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e",
"sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d",
"sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c",
"sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21",
"sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2",
"sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5",
"sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b",
"sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6",
"sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f",
"sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f",
"sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2",
"sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7",
"sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"
],
"version": "==1.1.1"
},
"netaddr": {
"hashes": [
"sha256:38aeec7cdd035081d3a4c306394b19d677623bf76fa0913f6695127c7753aefd",
"sha256:56b3558bd71f3f6999e4c52e349f38660e54a7a8a9943335f73dfc96883e08ca"
],
"version": "==0.7.19"
},
"paramiko": {
"hashes": [
"sha256:920492895db8013f6cc0179293147f830b8c7b21fdfc839b6bad760c27459d9f",
"sha256:9c980875fa4d2cb751604664e9a2d0f69096643f5be4db1b99599fe114a97b2f"
],
"version": "==2.7.1"
},
"pillow": {
"hashes": [
"sha256:0a628977ac2e01ca96aaae247ec2bd38e729631ddf2221b4b715446fd45505be",
"sha256:4d9ed9a64095e031435af120d3c910148067087541131e82b3e8db302f4c8946",
"sha256:54ebae163e8412aff0b9df1e88adab65788f5f5b58e625dc5c7f51eaf14a6837",
"sha256:5bfef0b1cdde9f33881c913af14e43db69815c7e8df429ceda4c70a5e529210f",
"sha256:5f3546ceb08089cedb9e8ff7e3f6a7042bb5b37c2a95d392fb027c3e53a2da00",
"sha256:5f7ae9126d16194f114435ebb79cc536b5682002a4fa57fa7bb2cbcde65f2f4d",
"sha256:62a889aeb0a79e50ecf5af272e9e3c164148f4bd9636cc6bcfa182a52c8b0533",
"sha256:7406f5a9b2fd966e79e6abdaf700585a4522e98d6559ce37fc52e5c955fade0a",
"sha256:8453f914f4e5a3d828281a6628cf517832abfa13ff50679a4848926dac7c0358",
"sha256:87269cc6ce1e3dee11f23fa515e4249ae678dbbe2704598a51cee76c52e19cda",
"sha256:875358310ed7abd5320f21dd97351d62de4929b0426cdb1eaa904b64ac36b435",
"sha256:8ac6ce7ff3892e5deaab7abaec763538ffd011f74dc1801d93d3c5fc541feee2",
"sha256:91b710e3353aea6fc758cdb7136d9bbdcb26b53cefe43e2cba953ac3ee1d3313",
"sha256:9d2ba4ed13af381233e2d810ff3bab84ef9f18430a9b336ab69eaf3cd24299ff",
"sha256:a62ec5e13e227399be73303ff301f2865bf68657d15ea50b038d25fc41097317",
"sha256:ab76e5580b0ed647a8d8d2d2daee170e8e9f8aad225ede314f684e297e3643c2",
"sha256:bf4003aa538af3f4205c5fac56eacaa67a6dd81e454ffd9e9f055fff9f1bc614",
"sha256:bf598d2e37cf8edb1a2f26ed3fb255191f5232badea4003c16301cb94ac5bdd0",
"sha256:c18f70dc27cc5d236f10e7834236aff60aadc71346a5bc1f4f83a4b3abee6386",
"sha256:c5ed816632204a2fc9486d784d8e0d0ae754347aba99c811458d69fcdfd2a2f9",
"sha256:dc058b7833184970d1248135b8b0ab702e6daa833be14035179f2acb78ff5636",
"sha256:ff3797f2f16bf9d17d53257612da84dd0758db33935777149b3334c01ff68865"
],
"version": "==7.0.0"
},
"pycparser": {
"hashes": [
"sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0",
"sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"
],
"version": "==2.20"
},
"pynacl": {
"hashes": [
"sha256:05c26f93964373fc0abe332676cb6735f0ecad27711035b9472751faa8521255",
"sha256:0c6100edd16fefd1557da078c7a31e7b7d7a52ce39fdca2bec29d4f7b6e7600c",
"sha256:0d0a8171a68edf51add1e73d2159c4bc19fc0718e79dec51166e940856c2f28e",
"sha256:1c780712b206317a746ace34c209b8c29dbfd841dfbc02aa27f2084dd3db77ae",
"sha256:2424c8b9f41aa65bbdbd7a64e73a7450ebb4aa9ddedc6a081e7afcc4c97f7621",
"sha256:2d23c04e8d709444220557ae48ed01f3f1086439f12dbf11976e849a4926db56",
"sha256:30f36a9c70450c7878053fa1344aca0145fd47d845270b43a7ee9192a051bf39",
"sha256:37aa336a317209f1bb099ad177fef0da45be36a2aa664507c5d72015f956c310",
"sha256:4943decfc5b905748f0756fdd99d4f9498d7064815c4cf3643820c9028b711d1",
"sha256:53126cd91356342dcae7e209f840212a58dcf1177ad52c1d938d428eebc9fee5",
"sha256:57ef38a65056e7800859e5ba9e6091053cd06e1038983016effaffe0efcd594a",
"sha256:5bd61e9b44c543016ce1f6aef48606280e45f892a928ca7068fba30021e9b786",
"sha256:6482d3017a0c0327a49dddc8bd1074cc730d45db2ccb09c3bac1f8f32d1eb61b",
"sha256:7d3ce02c0784b7cbcc771a2da6ea51f87e8716004512493a2b69016326301c3b",
"sha256:a14e499c0f5955dcc3991f785f3f8e2130ed504fa3a7f44009ff458ad6bdd17f",
"sha256:a39f54ccbcd2757d1d63b0ec00a00980c0b382c62865b61a505163943624ab20",
"sha256:aabb0c5232910a20eec8563503c153a8e78bbf5459490c49ab31f6adf3f3a415",
"sha256:bd4ecb473a96ad0f90c20acba4f0bf0df91a4e03a1f4dd6a4bdc9ca75aa3a715",
"sha256:bf459128feb543cfca16a95f8da31e2e65e4c5257d2f3dfa8c0c1031139c9c92",
"sha256:e2da3c13307eac601f3de04887624939aca8ee3c9488a0bb0eca4fb9401fc6b1",
"sha256:f67814c38162f4deb31f68d590771a29d5ae3b1bd64b75cf232308e5c74777e0"
],
"version": "==1.3.0"
},
"pyproj": {
"hashes": [
"sha256:0d8196a5ac75fee2cf71c21066b3344427abfa8ad69b536d3404d5c7c9c0b886",
"sha256:12e378a0a21c73f96177f6cf64520f17e6b7aa02fc9cb27bd5c2d5b06ce170af",
"sha256:17738836128704d8f80b771572d77b8733841f0cb0ca42620549236ea62c4663",
"sha256:1a39175944710b225fd1943cb3b8ea0c8e059d3016360022ca10bbb7a6bfc9ae",
"sha256:2566bffb5395c9fbdb02077a0bc3e3ed0b2e4e3cadf65019e3139a8dfe27dd1d",
"sha256:3f43277f21ddaabed93b9885a4e494b785dca56e31fd37a935519d99b07807f0",
"sha256:424304beca6e0b0bc12aa46fc6d14a481ea47b1a4edec4854bb281656de38948",
"sha256:48128d794c8f52fcff2433a481e3aa2ccb0e0b3ccd51d3ad7cc10cc488c3f547",
"sha256:4a16b650722982cddedd45dfc36435b96e0ba83a2aebd4a4c247e5a68c852442",
"sha256:5161f1b5ece8a5263b64d97a32fbc473a4c6fdca5c95478e58e519ef1e97528e",
"sha256:6839ce14635ebfb01c67e456148f4f1fa04b03ef9645551b89d36593f2a3e57d",
"sha256:80e9f85ab81da75289308f23a62e1426a38411a07b0da738958d65ae8cc6c59c",
"sha256:881b44e94c781d02ecf1d9314fc7f44c09e6d54a8eac281869365999ac4db7a1",
"sha256:977542d2f8cf2981cf3ad72cedfebcd6ac56977c7aa830d9b49fa7888b56e83d",
"sha256:9bba6cbff7e23bb6d9062786d516602681b4414e9e423c138a7360e4d2a193e8",
"sha256:9bf64bba03ddc534ed3c6271ba8f9d31040f40cf8e9e7e458b6b1524a6f59082",
"sha256:9c712ceaa01488ebe6e357e1dfa2434c2304aad8a810e5d4c3d2abe21def6d58",
"sha256:b7da17e5a5c6039f85843e88c2f1ca8606d1a4cc13a87e7b68b9f51a54ef201a",
"sha256:bcdf81b3f13d2cc0354a4c3f7a567b71fcf6fe8098e519aaaee8e61f05c9de10",
"sha256:bebd3f987b7196e9d2ccfe55911b0c76ba9ce309bcabfb629ef205cbaaad37c5",
"sha256:c244e923073cd0bab74ba861ba31724aab90efda35b47a9676603c1a8e80b3ba",
"sha256:dacb94a9d570f4d9fc9369a22d44d7b3071cfe4d57d0ff2f57abd7ef6127fe41"
],
"version": "==2.6.0"
},
"pyyaml": {
"hashes": [
"sha256:06a0d7ba600ce0b2d2fe2e78453a470b5a6e000a985dd4a4e54e436cc36b0e97",
"sha256:240097ff019d7c70a4922b6869d8a86407758333f02203e0fc6ff79c5dcede76",
"sha256:4f4b913ca1a7319b33cfb1369e91e50354d6f07a135f3b901aca02aa95940bd2",
"sha256:69f00dca373f240f842b2931fb2c7e14ddbacd1397d57157a9b005a6a9942648",
"sha256:73f099454b799e05e5ab51423c7bcf361c58d3206fa7b0d555426b1f4d9a3eaf",
"sha256:74809a57b329d6cc0fdccee6318f44b9b8649961fa73144a98735b0aaf029f1f",
"sha256:7739fc0fa8205b3ee8808aea45e968bc90082c10aef6ea95e855e10abf4a37b2",
"sha256:95f71d2af0ff4227885f7a6605c37fd53d3a106fcab511b8860ecca9fcf400ee",
"sha256:b8eac752c5e14d3eca0e6dd9199cd627518cb5ec06add0de9d32baeee6fe645d",
"sha256:cc8955cfbfc7a115fa81d85284ee61147059a753344bc51098f3ccd69b0d7e0c",
"sha256:d13155f591e6fcc1ec3b30685d50bf0711574e2c0dfffd7644babf8b5102ca1a"
],
"version": "==5.3.1"
},
"six": {
"hashes": [
"sha256:236bdbdce46e6e6a3d61a337c0f8b763ca1e8717c03b369e87a7ec7ce1319c0a",
"sha256:8f3cd2e254d8f793e7f3d6d9df77b92252b52637291d0f0da013c76ea2724b6c"
],
"version": "==1.14.0"
}
},
"develop": {
"appdirs": {
"hashes": [
"sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92",
"sha256:d8b24664561d0d34ddfaec54636d502d7cea6e29c3eaf68f3df6180863e2166e"
],
"version": "==1.4.3"
},
"attrs": {
"hashes": [
"sha256:08a96c641c3a74e44eb59afb61a24f2cb9f4d7188748e76ba4bb5edfa3cb7d1c",
"sha256:f7b7ce16570fe9965acd6d30101a28f62fb4a7f9e926b3bbc9b61f8b04247e72"
],
"version": "==19.3.0"
},
"black": {
"hashes": [
"sha256:09a9dcb7c46ed496a9850b76e4e825d6049ecd38b611f1224857a79bd985a8cf",
"sha256:68950ffd4d9169716bcb8719a56c07a2f4485354fec061cdd5910aa07369731c"
],
"index": "pypi",
"version": "==19.3b0"
},
"cfgv": {
"hashes": [
"sha256:1ccf53320421aeeb915275a196e23b3b8ae87dea8ac6698b1638001d4a486d53",
"sha256:c8e8f552ffcc6194f4e18dd4f68d9aef0c0d58ae7e7be8c82bee3c5e9edfa513"
],
"version": "==3.1.0"
},
"click": {
"hashes": [
"sha256:8a18b4ea89d8820c5d0c7da8a64b2c324b4dabb695804dbfea19b9be9d88c0cc",
"sha256:e345d143d80bf5ee7534056164e5e112ea5e22716bbb1ce727941f4c8b471b9a"
],
"version": "==7.1.1"
},
"distlib": {
"hashes": [
"sha256:2e166e231a26b36d6dfe35a48c4464346620f8645ed0ace01ee31822b288de21"
],
"version": "==0.3.0"
},
"entrypoints": {
"hashes": [
"sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19",
"sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451"
],
"version": "==0.3"
},
"filelock": {
"hashes": [
"sha256:18d82244ee114f543149c66a6e0c14e9c4f8a1044b5cdaadd0f82159d6a6ff59",
"sha256:929b7d63ec5b7d6b71b0fa5ac14e030b3f70b75747cef1b10da9b879fef15836"
],
"version": "==3.0.12"
},
"flake8": {
"hashes": [
"sha256:45681a117ecc81e870cbf1262835ae4af5e7a8b08e40b944a8a6e6b895914cfb",
"sha256:49356e766643ad15072a789a20915d3c91dc89fd313ccd71802303fd67e4deca"
],
"index": "pypi",
"version": "==3.7.9"
},
"grpcio": {
"hashes": [
"sha256:02aef8ef1a5ac5f0836b543e462eb421df6048a7974211a906148053b8055ea6",
"sha256:07f82aefb4a56c7e1e52b78afb77d446847d27120a838a1a0489260182096045",
"sha256:1cff47297ee614e7ef66243dc34a776883ab6da9ca129ea114a802c5e58af5c1",
"sha256:1ec8fc865d8da6d0713e2092a27eee344cd54628b2c2065a0e77fff94df4ae00",
"sha256:1ef949b15a1f5f30651532a9b54edf3bd7c0b699a10931505fa2c80b2d395942",
"sha256:209927e65395feb449783943d62a3036982f871d7f4045fadb90b2d82b153ea8",
"sha256:25c77692ea8c0929d4ad400ea9c3dcbcc4936cee84e437e0ef80da58fa73d88a",
"sha256:28f27c64dd699b8b10f70da5f9320c1cffcaefca7dd76275b44571bd097f276c",
"sha256:355bd7d7ce5ff2917d217f0e8ddac568cb7403e1ce1639b35a924db7d13a39b6",
"sha256:4a0a33ada3f6f94f855f92460896ef08c798dcc5f17d9364d1735c5adc9d7e4a",
"sha256:4d3b6e66f32528bf43ca2297caca768280a8e068820b1c3dca0fcf9f03c7d6f1",
"sha256:5121fa96c79fc0ec81825091d0be5c16865f834f41b31da40b08ee60552f9961",
"sha256:57949756a3ce1f096fa2b00f812755f5ab2effeccedb19feeb7d0deafa3d1de7",
"sha256:586d931736912865c9790c60ca2db29e8dc4eace160d5a79fec3e58df79a9386",
"sha256:5ae532b93cf9ce5a2a549b74a2c35e3b690b171ece9358519b3039c7b84c887e",
"sha256:5dab393ab96b2ce4012823b2f2ed4ee907150424d2f02b97bd6f8dd8f17cc866",
"sha256:5ebc13451246de82f130e8ee7e723e8d7ae1827f14b7b0218867667b1b12c88d",
"sha256:68a149a0482d0bc697aac702ec6efb9d380e0afebf9484db5b7e634146528371",
"sha256:6db7ded10b82592c472eeeba34b9f12d7b0ab1e2dcad12f081b08ebdea78d7d6",
"sha256:6e545908bcc2ae28e5b190ce3170f92d0438cf26a82b269611390114de0106eb",
"sha256:6f328a3faaf81a2546a3022b3dfc137cc6d50d81082dbc0c94d1678943f05df3",
"sha256:706e2dea3de33b0d8884c4d35ecd5911b4ff04d0697c4138096666ce983671a6",
"sha256:80c3d1ce8820dd819d1c9d6b63b6f445148480a831173b572a9174a55e7abd47",
"sha256:8111b61eee12d7af5c58f82f2c97c2664677a05df9225ef5cbc2f25398c8c454",
"sha256:9713578f187fb1c4d00ac554fe1edcc6b3ddd62f5d4eb578b81261115802df8e",
"sha256:9c0669ba9aebad540fb05a33beb7e659ea6e5ca35833fc5229c20f057db760e8",
"sha256:9e9cfe55dc7ac2aa47e0fd3285ff829685f96803197042c9d2f0fb44e4b39b2c",
"sha256:a22daaf30037b8e59d6968c76fe0f7ff062c976c7a026e92fbefc4c4bf3fc5a4",
"sha256:a25b84e10018875a0f294a7649d07c43e8bc3e6a821714e39e5cd607a36386d7",
"sha256:a71138366d57901597bfcc52af7f076ab61c046f409c7b429011cd68de8f9fe6",
"sha256:b4efde5524579a9ce0459ca35a57a48ca878a4973514b8bb88cb80d7c9d34c85",
"sha256:b78af4d42985ab3143d9882d0006f48d12f1bc4ba88e78f23762777c3ee64571",
"sha256:bb2987eb3af9bcf46019be39b82c120c3d35639a95bc4ee2d08f36ecdf469345",
"sha256:c03ce53690fe492845e14f4ab7e67d5a429a06db99b226b5c7caa23081c1e2bb",
"sha256:c59b9280284b791377b3524c8e39ca7b74ae2881ba1a6c51b36f4f1bb94cee49",
"sha256:d18b4c8cacbb141979bb44355ee5813dd4d307e9d79b3a36d66eca7e0a203df8",
"sha256:d1e5563e3b7f844dbc48d709c9e4a75647e11d0387cc1fa0c861d3e9d34bc844",
"sha256:d22c897b65b1408509099f1c3334bd3704f5e4eb7c0486c57d0e212f71cb8f54",
"sha256:dbec0a3a154dbf2eb85b38abaddf24964fa1c059ee0a4ad55d6f39211b1a4bca",
"sha256:ed123037896a8db6709b8ad5acc0ed435453726ea0b63361d12de369624c2ab5",
"sha256:f3614dabd2cc8741850597b418bcf644d4f60e73615906c3acc407b78ff720b3",
"sha256:f9d632ce9fd485119c968ec6a7a343de698c5e014d17602ae2f110f1b05925ed",
"sha256:fb62996c61eeff56b59ab8abfcaa0859ec2223392c03d6085048b576b567459b"
],
"version": "==1.27.2"
},
"grpcio-tools": {
"hashes": [
"sha256:00c5080cfb197ed20ecf0d0ff2d07f1fc9c42c724cad21c40ff2d048de5712b1",
"sha256:069826dd02ce1886444cf4519c4fe1b05ac9ef41491f26e97400640531db47f6",
"sha256:1266b577abe7c720fd16a83d0a4999a192e87c4a98fc9f97e0b99b106b3e155f",
"sha256:16dc3fad04fe18d50777c56af7b2d9b9984cd1cfc71184646eb431196d1645c6",
"sha256:1de5a273eaffeb3d126a63345e9e848ea7db740762f700eb8b5d84c5e3e7687d",
"sha256:2ca280af2cae1a014a238057bd3c0a254527569a6a9169a01c07f0590081d530",
"sha256:43a1573400527a23e4174d88604fde7a9d9a69bf9473c21936b7f409858f8ebb",
"sha256:4698c6b6a57f73b14d91a542c69ff33a2da8729691b7060a5d7f6383624d045e",
"sha256:520b7dafddd0f82cb7e4f6e9c6ba1049aa804d0e207870def9fe7f94d1e14090",
"sha256:57f8b9e2c7f55cd45f6dd930d6de61deb42d3eb7f9788137fbc7155cf724132a",
"sha256:59fbeb5bb9a7b94eb61642ac2cee1db5233b8094ca76fc56d4e0c6c20b5dd85f",
"sha256:5fd7efc2fd3370bd2c72dc58f31a407a5dff5498befa145da211b2e8c6a52c63",
"sha256:6016c07d6566e3109a3c032cf3861902d66501ecc08a5a84c47e43027302f367",
"sha256:627c91923df75091d8c4d244af38d5ab7ed8d786d480751d6c2b9267fbb92fe0",
"sha256:69c4a63919b9007e845d9f8980becd2f89d808a4a431ca32b9723ee37b521cb1",
"sha256:77e25c241e33b75612f2aa62985f746c6f6803ec4e452da508bb7f8d90a69db4",
"sha256:7a2d5fb558ac153a326e742ebfd7020eb781c43d3ffd920abd42b2e6c6fdfb37",
"sha256:7b54b283ec83190680903a9037376dc915e1f03852a2d574ba4d981b7a1fd3d0",
"sha256:845a51305af9fc7f9e2078edaec9a759153195f6cf1fbb12b1fa6f077e56b260",
"sha256:84724458c86ff9b14c29b49e321f34d80445b379f4cd4d0494c694b49b1d6f88",
"sha256:87e8ca2c2d2d3e09b2a2bed5d740d7b3e64028dafb7d6be543b77eec85590736",
"sha256:8e7738a4b93842bca1158cde81a3587c9b7111823e40a1ddf73292ca9d58e08b",
"sha256:915a695bc112517af48126ee0ecdb6aff05ed33f3eeef28f0d076f1f6b52ef5e",
"sha256:99961156a36aae4a402d6b14c1e7efde642794b3ddbf32c51db0cb3a199e8b11",
"sha256:9ba88c2d99bcaf7b9cb720925e3290d73b2367d238c5779363fd5598b2dc98c7",
"sha256:a140bf853edb2b5e8692fe94869e3e34077d7599170c113d07a58286c604f4fe",
"sha256:a14dc7a36c845991d908a7179502ca47bcba5ae1817c4426ce68cf2c97b20ad9",
"sha256:a3d2aec4b09c8e59fee8b0d1ed668d09e8c48b738f03f5d8401d7eb409111c47",
"sha256:a8f892378b0b02526635b806f59141abbb429d19bec56e869e04f396502c9651",
"sha256:aaa5ae26883c3d58d1a4323981f96b941fa09bb8f0f368d97c6225585280cf04",
"sha256:b56caecc16307b088a431a4038c3b3bb7d0e7f9988cbd0e9fa04ac937455ea38",
"sha256:bd7f59ff1252a3db8a143b13ea1c1e93d4b8cf4b852eb48b22ef1e6942f62a84",
"sha256:c1bb8f47d58e9f7c4825abfe01e6b85eda53c8b31d2267ca4cddf3c4d0829b80",
"sha256:d1a5e5fa47ba9557a7d3b31605631805adc66cdba9d95b5d10dfc52cca1fed53",
"sha256:dcbc06556f3713a9348c4fce02d05d91e678fc320fb2bcf0ddf8e4bb11d17867",
"sha256:e17b2e0936b04ced99769e26111e1e86ba81619d1b2691b1364f795e45560953",
"sha256:e6932518db389ede8bf06b4119bbd3e17f42d4626e72dec2b8955b20ec732cb6",
"sha256:ea4b3ad696d976d5eac74ec8df9a2c692113e455446ee38d5b3bd87f8e034fa6",
"sha256:ee50b0cf0d28748ef9f941894eb50fc464bd61b8e96aaf80c5056bea9b80d580",
"sha256:ef624b6134aef737b3daa4fb7e806cb8c5749efecd0b1fa9ce4f7e060c7a0221",
"sha256:f5450aa904e720f9c6407b59e96a8951ed6a95463f49444b6d2594b067d39588",
"sha256:f8514453411d72cc3cf7d481f2b6057e5b7436736d0cd39ee2b2f72088bbf497",
"sha256:fae91f30dc050a8d0b32d20dc700e6092f0bd2138d83e9570fff3f0372c1b27e"
],
"index": "pypi",
"version": "==1.27.2"
},
"identify": {
"hashes": [
"sha256:a7577a1f55cee1d21953a5cf11a3c839ab87f5ef909a4cba6cf52ed72b4c6059",
"sha256:ab246293e6585a1c6361a505b68d5b501a0409310932b7de2c2ead667b564d89"
],
"version": "==1.4.13"
},
"importlib-metadata": {
"hashes": [
"sha256:2a688cbaa90e0cc587f1df48bdc97a6eadccdcd9c35fb3f976a09e3b5016d90f",
"sha256:34513a8a0c4962bc66d35b359558fd8a5e10cd472d37aec5f66858addef32c1e"
],
"markers": "python_version < '3.8'",
"version": "==1.6.0"
},
"importlib-resources": {
"hashes": [
"sha256:4019b6a9082d8ada9def02bece4a76b131518866790d58fdda0b5f8c603b36c2",
"sha256:dd98ceeef3f5ad2ef4cc287b8586da4ebad15877f351e9688987ad663a0a29b8"
],
"markers": "python_version < '3.7'",
"version": "==1.4.0"
},
"isort": {
"hashes": [
"sha256:54da7e92468955c4fceacd0c86bd0ec997b0e1ee80d97f67c35a78b719dccab1",
"sha256:6e811fcb295968434526407adb8796944f1988c5b65e8139058f2014cbe100fd"
],
"index": "pypi",
"version": "==4.3.21"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"mock": {
"hashes": [
"sha256:3f9b2c0196c60d21838f307f5825a7b86b678cedc58ab9e50a8988187b4d81e0",
"sha256:dd33eb70232b6118298d516bbcecd26704689c386594f0f3c4f13867b2c56f72"
],
"index": "pypi",
"version": "==4.0.2"
},
"more-itertools": {
"hashes": [
"sha256:5dd8bcf33e5f9513ffa06d5ad33d78f31e1931ac9a18f33d37e77a180d393a7c",
"sha256:b1ddb932186d8a6ac451e1d95844b382f55e12686d51ca0c68b6f61f2ab7a507"
],
"version": "==8.2.0"
},
"nodeenv": {
"hashes": [
"sha256:5b2438f2e42af54ca968dd1b374d14a1194848955187b0e5e4be1f73813a5212"
],
"version": "==1.3.5"
},
"packaging": {
"hashes": [
"sha256:3c292b474fda1671ec57d46d739d072bfd495a4f51ad01a055121d81e952b7a3",
"sha256:82f77b9bee21c1bafbf35a84905d604d5d1223801d639cf3ed140bd651c08752"
],
"version": "==20.3"
},
"pluggy": {
"hashes": [
"sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0",
"sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"
],
"version": "==0.13.1"
},
"pre-commit": {
"hashes": [
"sha256:487c675916e6f99d355ec5595ad77b325689d423ef4839db1ed2f02f639c9522",
"sha256:c0aa11bce04a7b46c5544723aedf4e81a4d5f64ad1205a30a9ea12d5e81969e1"
],
"index": "pypi",
"version": "==2.2.0"
},
"protobuf": {
"hashes": [
"sha256:0bae429443cc4748be2aadfdaf9633297cfaeb24a9a02d0ab15849175ce90fab",
"sha256:24e3b6ad259544d717902777b33966a1a069208c885576254c112663e6a5bb0f",
"sha256:310a7aca6e7f257510d0c750364774034272538d51796ca31d42c3925d12a52a",
"sha256:52e586072612c1eec18e1174f8e3bb19d08f075fc2e3f91d3b16c919078469d0",
"sha256:73152776dc75f335c476d11d52ec6f0f6925774802cd48d6189f4d5d7fe753f4",
"sha256:7774bbbaac81d3ba86de646c39f154afc8156717972bf0450c9dbfa1dc8dbea2",
"sha256:82d7ac987715d8d1eb4068bf997f3053468e0ce0287e2729c30601feb6602fee",
"sha256:8eb9c93798b904f141d9de36a0ba9f9b73cc382869e67c9e642c0aba53b0fc07",
"sha256:adf0e4d57b33881d0c63bb11e7f9038f98ee0c3e334c221f0858f826e8fb0151",
"sha256:c40973a0aee65422d8cb4e7d7cbded95dfeee0199caab54d5ab25b63bce8135a",
"sha256:c77c974d1dadf246d789f6dad1c24426137c9091e930dbf50e0a29c1fcf00b1f",
"sha256:dd9aa4401c36785ea1b6fff0552c674bdd1b641319cb07ed1fe2392388e9b0d7",
"sha256:e11df1ac6905e81b815ab6fd518e79be0a58b5dc427a2cf7208980f30694b956",
"sha256:e2f8a75261c26b2f5f3442b0525d50fd79a71aeca04b5ec270fc123536188306",
"sha256:e512b7f3a4dd780f59f1bf22c302740e27b10b5c97e858a6061772668cd6f961",
"sha256:ef2c2e56aaf9ee914d3dccc3408d42661aaf7d9bb78eaa8f17b2e6282f214481",
"sha256:fac513a9dc2a74b99abd2e17109b53945e364649ca03d9f7a0b96aa8d1807d0a",
"sha256:fdfb6ad138dbbf92b5dbea3576d7c8ba7463173f7d2cb0ca1bd336ec88ddbd80"
],
"version": "==3.11.3"
},
"py": {
"hashes": [
"sha256:5e27081401262157467ad6e7f851b7aa402c5852dbcb3dae06768434de5752aa",
"sha256:c20fdd83a5dbc0af9efd622bee9a5564e278f6380fffcacc43ba6f43db2813b0"
],
"version": "==1.8.1"
},
"pycodestyle": {
"hashes": [
"sha256:95a2219d12372f05704562a14ec30bc76b05a5b297b21a5dfe3f6fac3491ae56",
"sha256:e40a936c9a450ad81df37f549d676d127b1b66000a6c500caa2b085bc0ca976c"
],
"version": "==2.5.0"
},
"pyflakes": {
"hashes": [
"sha256:17dbeb2e3f4d772725c777fabc446d5634d1038f234e77343108ce445ea69ce0",
"sha256:d976835886f8c5b31d47970ed689944a0262b5f3afa00a5a7b4dc81e5449f8a2"
],
"version": "==2.1.1"
},
"pyparsing": {
"hashes": [
"sha256:4c830582a84fb022400b85429791bc551f1f4871c33f23e44f353119e92f969f",
"sha256:c342dccb5250c08d45fd6f8b4a559613ca603b57498511740e65cd11a2e7dcec"
],
"version": "==2.4.6"
},
"pytest": {
"hashes": [
"sha256:0e5b30f5cb04e887b91b1ee519fa3d89049595f428c1db76e73bd7f17b09b172",
"sha256:84dde37075b8805f3d1f392cc47e38a0e59518fb46a431cfdaf7cf1ce805f970"
],
"index": "pypi",
"version": "==5.4.1"
},
"pyyaml": {
"hashes": [
"sha256:06a0d7ba600ce0b2d2fe2e78453a470b5a6e000a985dd4a4e54e436cc36b0e97",
"sha256:240097ff019d7c70a4922b6869d8a86407758333f02203e0fc6ff79c5dcede76",
"sha256:4f4b913ca1a7319b33cfb1369e91e50354d6f07a135f3b901aca02aa95940bd2",
"sha256:69f00dca373f240f842b2931fb2c7e14ddbacd1397d57157a9b005a6a9942648",
"sha256:73f099454b799e05e5ab51423c7bcf361c58d3206fa7b0d555426b1f4d9a3eaf",
"sha256:74809a57b329d6cc0fdccee6318f44b9b8649961fa73144a98735b0aaf029f1f",
"sha256:7739fc0fa8205b3ee8808aea45e968bc90082c10aef6ea95e855e10abf4a37b2",
"sha256:95f71d2af0ff4227885f7a6605c37fd53d3a106fcab511b8860ecca9fcf400ee",
"sha256:b8eac752c5e14d3eca0e6dd9199cd627518cb5ec06add0de9d32baeee6fe645d",
"sha256:cc8955cfbfc7a115fa81d85284ee61147059a753344bc51098f3ccd69b0d7e0c",
"sha256:d13155f591e6fcc1ec3b30685d50bf0711574e2c0dfffd7644babf8b5102ca1a"
],
"version": "==5.3.1"
},
"six": {
"hashes": [
"sha256:236bdbdce46e6e6a3d61a337c0f8b763ca1e8717c03b369e87a7ec7ce1319c0a",
"sha256:8f3cd2e254d8f793e7f3d6d9df77b92252b52637291d0f0da013c76ea2724b6c"
],
"version": "==1.14.0"
},
"toml": {
"hashes": [
"sha256:229f81c57791a41d65e399fc06bf0848bab550a9dfd5ed66df18ce5f05e73d5c",
"sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e"
],
"version": "==0.10.0"
},
"virtualenv": {
"hashes": [
"sha256:4e399f48c6b71228bf79f5febd27e3bbb753d9d5905776a86667bc61ab628a25",
"sha256:9e81279f4a9d16d1c0654a127c2c86e5bca2073585341691882c1e66e31ef8a5"
],
"version": "==20.0.15"
},
"wcwidth": {
"hashes": [
"sha256:cafe2186b3c009a04067022ce1dcd79cb38d8d65ee4f4791b8888d6599d1bbe1",
"sha256:ee73862862a156bf77ff92b09034fc4825dd3af9cf81bc5b360668d425f3c5f1"
],
"version": "==0.1.9"
},
"zipp": {
"hashes": [
"sha256:aa36550ff0c0b7ef7fa639055d797116ee891440eac1a56f378e2d3179e0320b",
"sha256:c599e4d75c98f6798c509911d08a22e6c021d074469042177c8c86fb92eefd96"
],
"markers": "python_version < '3.8'",
"version": "==3.1.0"
}
}
}

View file

@ -2,3 +2,6 @@ import logging.config
# setup default null handler # setup default null handler
logging.getLogger(__name__).addHandler(logging.NullHandler()) logging.getLogger(__name__).addHandler(logging.NullHandler())
# disable paramiko logging
logging.getLogger("paramiko").setLevel(logging.WARNING)

File diff suppressed because it is too large

View file

@ -1,10 +1,9 @@
import logging import logging
from collections.abc import Iterable
from queue import Empty, Queue from queue import Empty, Queue
from typing import Optional from typing import Iterable
from core.api.grpc import core_pb2, grpcutils from core.api.grpc import core_pb2
from core.api.grpc.grpcutils import convert_link_data from core.api.grpc.grpcutils import convert_link
from core.emulator.data import ( from core.emulator.data import (
ConfigData, ConfigData,
EventData, EventData,
@ -15,121 +14,114 @@ from core.emulator.data import (
) )
from core.emulator.session import Session from core.emulator.session import Session
logger = logging.getLogger(__name__)
def handle_node_event(event: NodeData) -> core_pb2.NodeEvent:
def handle_node_event(session: Session, node_data: NodeData) -> core_pb2.Event:
""" """
Handle node event when there is a node event Handle node event when there is a node event
:param session: session node is from :param event: node data
:param node_data: node data
:return: node event that contains node id, name, model, position, and services :return: node event that contains node id, name, model, position, and services
""" """
node = node_data.node position = core_pb2.Position(x=event.x_position, y=event.y_position)
emane_configs = grpcutils.get_emane_model_configs_dict(session) node_proto = core_pb2.Node(
node_emane_configs = emane_configs.get(node.id, []) id=event.id,
node_proto = grpcutils.get_node_proto(session, node, node_emane_configs) name=event.name,
message_type = node_data.message_type.value model=event.model,
node_event = core_pb2.NodeEvent(message_type=message_type, node=node_proto) position=position,
return core_pb2.Event(node_event=node_event, source=node_data.source) services=event.services,
)
return core_pb2.NodeEvent(node=node_proto, source=event.source)
def handle_link_event(link_data: LinkData) -> core_pb2.Event: def handle_link_event(event: LinkData) -> core_pb2.LinkEvent:
""" """
Handle link event when there is a link event Handle link event when there is a link event
:param link_data: link data :param event: link data
:return: link event that has message type and link information :return: link event that has message type and link information
""" """
link = convert_link_data(link_data) link = convert_link(event)
message_type = link_data.message_type.value return core_pb2.LinkEvent(message_type=event.message_type.value, link=link)
link_event = core_pb2.LinkEvent(message_type=message_type, link=link)
return core_pb2.Event(link_event=link_event, source=link_data.source)
def handle_session_event(event_data: EventData) -> core_pb2.Event: def handle_session_event(event: EventData) -> core_pb2.SessionEvent:
""" """
Handle session event when there is a session event Handle session event when there is a session event
:param event_data: event data :param event: event data
:return: session event :return: session event
""" """
event_time = event_data.time event_time = event.time
if event_time is not None: if event_time is not None:
event_time = float(event_time) event_time = float(event_time)
session_event = core_pb2.SessionEvent( return core_pb2.SessionEvent(
node_id=event_data.node, node_id=event.node,
event=event_data.event_type.value, event=event.event_type.value,
name=event_data.name, name=event.name,
data=event_data.data, data=event.data,
time=event_time, time=event_time,
) )
return core_pb2.Event(session_event=session_event)
def handle_config_event(config_data: ConfigData) -> core_pb2.Event: def handle_config_event(event: ConfigData) -> core_pb2.ConfigEvent:
""" """
Handle configuration event when there is a configuration event Handle configuration event when there is a configuration event
:param config_data: configuration data :param event: configuration data
:return: configuration event :return: configuration event
""" """
config_event = core_pb2.ConfigEvent( return core_pb2.ConfigEvent(
message_type=config_data.message_type, message_type=event.message_type,
node_id=config_data.node, node_id=event.node,
object=config_data.object, object=event.object,
type=config_data.type, type=event.type,
captions=config_data.captions, captions=event.captions,
bitmap=config_data.bitmap, bitmap=event.bitmap,
data_values=config_data.data_values, data_values=event.data_values,
possible_values=config_data.possible_values, possible_values=event.possible_values,
groups=config_data.groups, groups=event.groups,
iface_id=config_data.iface_id, interface=event.interface_number,
network_id=config_data.network_id, network_id=event.network_id,
opaque=config_data.opaque, opaque=event.opaque,
data_types=config_data.data_types, data_types=event.data_types,
) )
return core_pb2.Event(config_event=config_event)
def handle_exception_event(exception_data: ExceptionData) -> core_pb2.Event: def handle_exception_event(event: ExceptionData) -> core_pb2.ExceptionEvent:
""" """
Handle exception event when there is an exception event Handle exception event when there is an exception event
:param exception_data: exception data :param event: exception data
:return: exception event :return: exception event
""" """
exception_event = core_pb2.ExceptionEvent( return core_pb2.ExceptionEvent(
node_id=exception_data.node, node_id=event.node,
level=exception_data.level.value, level=event.level.value,
source=exception_data.source, source=event.source,
date=exception_data.date, date=event.date,
text=exception_data.text, text=event.text,
opaque=exception_data.opaque, opaque=event.opaque,
) )
return core_pb2.Event(exception_event=exception_event)
def handle_file_event(file_data: FileData) -> core_pb2.Event: def handle_file_event(event: FileData) -> core_pb2.FileEvent:
""" """
Handle file event Handle file event
:param file_data: file data :param event: file data
:return: file event :return: file event
""" """
file_event = core_pb2.FileEvent( return core_pb2.FileEvent(
message_type=file_data.message_type.value, message_type=event.message_type.value,
node_id=file_data.node, node_id=event.node,
name=file_data.name, name=event.name,
mode=file_data.mode, mode=event.mode,
number=file_data.number, number=event.number,
type=file_data.type, type=event.type,
source=file_data.source, source=event.source,
data=file_data.data, data=event.data,
compressed_data=file_data.compressed_data, compressed_data=event.compressed_data,
) )
return core_pb2.Event(file_event=file_event)
class EventStreamer: class EventStreamer:
@ -146,9 +138,9 @@ class EventStreamer:
:param session: session to process events for :param session: session to process events for
:param event_types: types of events to process :param event_types: types of events to process
""" """
self.session: Session = session self.session = session
self.event_types: Iterable[core_pb2.EventType] = event_types self.event_types = event_types
self.queue: Queue = Queue() self.queue = Queue()
self.add_handlers() self.add_handlers()
def add_handlers(self) -> None: def add_handlers(self) -> None:
@ -170,33 +162,32 @@ class EventStreamer:
if core_pb2.EventType.SESSION in self.event_types: if core_pb2.EventType.SESSION in self.event_types:
self.session.event_handlers.append(self.queue.put) self.session.event_handlers.append(self.queue.put)
def process(self) -> Optional[core_pb2.Event]: def process(self) -> core_pb2.Event:
""" """
Process the next event in the queue. Process the next event in the queue.
:return: grpc event, or None when invalid event or queue timeout :return: grpc event, or None when invalid event or queue timeout
""" """
event = None event = core_pb2.Event(session_id=self.session.id)
try: try:
data = self.queue.get(timeout=1) data = self.queue.get(timeout=1)
if isinstance(data, NodeData): if isinstance(data, NodeData):
event = handle_node_event(self.session, data) event.node_event.CopyFrom(handle_node_event(data))
elif isinstance(data, LinkData): elif isinstance(data, LinkData):
event = handle_link_event(data) event.link_event.CopyFrom(handle_link_event(data))
elif isinstance(data, EventData): elif isinstance(data, EventData):
event = handle_session_event(data) event.session_event.CopyFrom(handle_session_event(data))
elif isinstance(data, ConfigData): elif isinstance(data, ConfigData):
event = handle_config_event(data) event.config_event.CopyFrom(handle_config_event(data))
elif isinstance(data, ExceptionData): elif isinstance(data, ExceptionData):
event = handle_exception_event(data) event.exception_event.CopyFrom(handle_exception_event(data))
elif isinstance(data, FileData): elif isinstance(data, FileData):
event = handle_file_event(data) event.file_event.CopyFrom(handle_file_event(data))
else: else:
logger.error("unknown event: %s", data) logging.error("unknown event: %s", data)
event = None
except Empty: except Empty:
pass event = None
if event:
event.session_id = self.session.id
return event return event
def remove_handlers(self) -> None: def remove_handlers(self) -> None:
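
Note on the event streaming change above: each reworked handler now returns a specific protobuf event message, and EventStreamer.process() copies that message into a single core_pb2.Event tagged with the session id. A minimal consumer sketch follows; the module path core.api.grpc.events and the availability of a running Session instance are assumptions, not shown in this excerpt.

from core.api.grpc import core_pb2
from core.api.grpc.events import EventStreamer  # module path assumed

def stream_session_events(session) -> None:
    # subscribe to every event type defined by the gRPC API
    streamer = EventStreamer(session, core_pb2.EventType.values())
    try:
        while True:
            event = streamer.process()
            if event is None:
                # nothing arrived within the 1 second queue timeout
                continue
            print(event.session_id, event)
    finally:
        # detach the queue from the session handlers when done
        streamer.remove_handlers()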

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -0,0 +1,60 @@
"""
Defines core server for handling TCP connections.
"""
import socketserver
from core.emulator.coreemu import CoreEmu
class CoreServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
"""
TCP server class, manages sessions and spawns request handlers for
incoming connections.
"""
daemon_threads = True
allow_reuse_address = True
def __init__(self, server_address, handler_class, config=None):
"""
Server class initialization takes configuration data and calls
the socketserver constructor.
:param tuple[str, int] server_address: server host and port to use
:param class handler_class: request handler
:param dict config: configuration setting
"""
self.coreemu = CoreEmu(config)
self.config = config
socketserver.TCPServer.__init__(self, server_address, handler_class)
class CoreUdpServer(socketserver.ThreadingMixIn, socketserver.UDPServer):
"""
UDP server class, manages sessions and spawns request handlers for
incoming connections.
"""
daemon_threads = True
allow_reuse_address = True
def __init__(self, server_address, handler_class, mainserver):
"""
Server class initialization takes configuration data and calls
the SocketServer constructor
:param server_address:
:param class handler_class: request handler
:param mainserver:
"""
self.mainserver = mainserver
socketserver.UDPServer.__init__(self, server_address, handler_class)
def start(self):
"""
Thread target to run concurrently with the TCP server.
:return: nothing
"""
self.serve_forever()
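
Usage note for the server classes above: CoreServer owns a CoreEmu instance and delegates connection handling to whatever socketserver-compatible handler class is passed in. A rough sketch of standing one up on the default TLV port; the coreserver module path and the CoreHandler import location are assumptions, not part of this diff.

from core.api.tlv.coreserver import CoreServer       # module path assumed
from core.api.tlv.corehandlers import CoreHandler     # handler location assumed
from core.api.tlv.enumerations import CORE_API_PORT

config = {}  # daemon settings mapping, passed straight through to CoreEmu
server = CoreServer(("localhost", CORE_API_PORT), CoreHandler, config)
try:
    server.serve_forever()
except KeyboardInterrupt:
    server.shutdown()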

View file

@ -0,0 +1,182 @@
"""
Converts CORE data objects into legacy API messages.
"""
import logging
from collections import OrderedDict
from typing import Dict, List
from core.api.tlv import coreapi, structutils
from core.api.tlv.enumerations import ConfigTlvs, NodeTlvs
from core.config import ConfigGroup, ConfigurableOptions
from core.emulator.data import ConfigData
def convert_node(node_data):
"""
Convenience method for converting NodeData to a packed TLV message.
:param core.emulator.data.NodeData node_data: node data to convert
:return: packed node message
"""
session = None
if node_data.session is not None:
session = str(node_data.session)
services = None
if node_data.services is not None:
services = "|".join([x for x in node_data.services])
tlv_data = structutils.pack_values(
coreapi.CoreNodeTlv,
[
(NodeTlvs.NUMBER, node_data.id),
(NodeTlvs.TYPE, node_data.node_type.value),
(NodeTlvs.NAME, node_data.name),
(NodeTlvs.IP_ADDRESS, node_data.ip_address),
(NodeTlvs.MAC_ADDRESS, node_data.mac_address),
(NodeTlvs.IP6_ADDRESS, node_data.ip6_address),
(NodeTlvs.MODEL, node_data.model),
(NodeTlvs.EMULATION_ID, node_data.emulation_id),
(NodeTlvs.EMULATION_SERVER, node_data.server),
(NodeTlvs.SESSION, session),
(NodeTlvs.X_POSITION, int(node_data.x_position)),
(NodeTlvs.Y_POSITION, int(node_data.y_position)),
(NodeTlvs.CANVAS, node_data.canvas),
(NodeTlvs.NETWORK_ID, node_data.network_id),
(NodeTlvs.SERVICES, services),
(NodeTlvs.LATITUDE, str(node_data.latitude)),
(NodeTlvs.LONGITUDE, str(node_data.longitude)),
(NodeTlvs.ALTITUDE, str(node_data.altitude)),
(NodeTlvs.ICON, node_data.icon),
(NodeTlvs.OPAQUE, node_data.opaque),
],
)
return coreapi.CoreNodeMessage.pack(node_data.message_type.value, tlv_data)
def convert_config(config_data):
"""
Convenience method for converting ConfigData to a packed TLV message.
:param core.emulator.data.ConfigData config_data: config data to convert
:return: packed message
"""
session = None
if config_data.session is not None:
session = str(config_data.session)
tlv_data = structutils.pack_values(
coreapi.CoreConfigTlv,
[
(ConfigTlvs.NODE, config_data.node),
(ConfigTlvs.OBJECT, config_data.object),
(ConfigTlvs.TYPE, config_data.type),
(ConfigTlvs.DATA_TYPES, config_data.data_types),
(ConfigTlvs.VALUES, config_data.data_values),
(ConfigTlvs.CAPTIONS, config_data.captions),
(ConfigTlvs.BITMAP, config_data.bitmap),
(ConfigTlvs.POSSIBLE_VALUES, config_data.possible_values),
(ConfigTlvs.GROUPS, config_data.groups),
(ConfigTlvs.SESSION, session),
(ConfigTlvs.INTERFACE_NUMBER, config_data.interface_number),
(ConfigTlvs.NETWORK_ID, config_data.network_id),
(ConfigTlvs.OPAQUE, config_data.opaque),
],
)
return coreapi.CoreConfMessage.pack(config_data.message_type, tlv_data)
class ConfigShim:
"""
Provides helper methods for converting newer configuration values into TLV
compatible formats.
"""
@classmethod
def str_to_dict(cls, key_values: str) -> Dict[str, str]:
"""
Converts a TLV key/value string into an ordered mapping.
:param key_values:
:return: ordered mapping of key/value pairs
"""
key_values = key_values.split("|")
values = OrderedDict()
for key_value in key_values:
key, value = key_value.split("=", 1)
values[key] = value
return values
@classmethod
def groups_to_str(cls, config_groups: List[ConfigGroup]) -> str:
"""
Converts configuration groups to a TLV formatted string.
:param config_groups: configuration groups to format
:return: TLV configuration group string
"""
group_strings = []
for config_group in config_groups:
group_string = (
f"{config_group.name}:{config_group.start}-{config_group.stop}"
)
group_strings.append(group_string)
return "|".join(group_strings)
@classmethod
def config_data(
cls,
flags: int,
node_id: int,
type_flags: int,
configurable_options: ConfigurableOptions,
config: Dict[str, str],
) -> ConfigData:
"""
Convert this class to a Config API message. Some TLVs are defined
by the class, but node number, conf type flags, and values must
be passed in.
:param flags: message flags
:param node_id: node id
:param type_flags: type flags
:param configurable_options: options to create config data for
:param config: configuration values for options
:return: configuration data object
"""
key_values = None
captions = None
data_types = []
possible_values = []
logging.debug("configurable: %s", configurable_options)
logging.debug("configuration options: %s", configurable_options.configurations)
logging.debug("configuration data: %s", config)
for configuration in configurable_options.configurations():
if not captions:
captions = configuration.label
else:
captions += f"|{configuration.label}"
data_types.append(configuration.type.value)
options = ",".join(configuration.options)
possible_values.append(options)
_id = configuration.id
config_value = config.get(_id, configuration.default)
key_value = f"{_id}={config_value}"
if not key_values:
key_values = key_value
else:
key_values += f"|{key_value}"
groups_str = cls.groups_to_str(configurable_options.config_groups())
return ConfigData(
message_type=flags,
node=node_id,
object=configurable_options.name,
type=type_flags,
data_types=tuple(data_types),
data_values=key_values,
captions=captions,
possible_values="|".join(possible_values),
bitmap=configurable_options.bitmap,
groups=groups_str,
)
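
A quick illustration of the ConfigShim helpers above: str_to_dict splits the pipe-delimited TLV string and splits each pair on the first "=" only, so values containing "=" survive, while groups_to_str renders ConfigGroup ranges in the "name:start-stop" form the TLV API expects. The dataconversion module name is assumed from context.

from core.api.tlv.dataconversion import ConfigShim  # module name assumed
from core.config import ConfigGroup

values = ConfigShim.str_to_dict("ssid=core|channel=6|key=a=b")
# OrderedDict([('ssid', 'core'), ('channel', '6'), ('key', 'a=b')])

groups = ConfigShim.groups_to_str([ConfigGroup("Basic", 1, 4), ConfigGroup("Advanced", 5, 9)])
# 'Basic:1-4|Advanced:5-9'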

View file

@ -0,0 +1,212 @@
"""
Enumerations specific to the CORE TLV API.
"""
from enum import Enum
CORE_API_PORT = 4038
class MessageTypes(Enum):
"""
CORE message types.
"""
NODE = 0x01
LINK = 0x02
EXECUTE = 0x03
REGISTER = 0x04
CONFIG = 0x05
FILE = 0x06
INTERFACE = 0x07
EVENT = 0x08
SESSION = 0x09
EXCEPTION = 0x0A
class NodeTlvs(Enum):
"""
Node type, length, value enumerations.
"""
NUMBER = 0x01
TYPE = 0x02
NAME = 0x03
IP_ADDRESS = 0x04
MAC_ADDRESS = 0x05
IP6_ADDRESS = 0x06
MODEL = 0x07
EMULATION_SERVER = 0x08
SESSION = 0x0A
X_POSITION = 0x20
Y_POSITION = 0x21
CANVAS = 0x22
EMULATION_ID = 0x23
NETWORK_ID = 0x24
SERVICES = 0x25
LATITUDE = 0x30
LONGITUDE = 0x31
ALTITUDE = 0x32
ICON = 0x42
OPAQUE = 0x50
class LinkTlvs(Enum):
"""
Link type, length, value enumerations.
"""
N1_NUMBER = 0x01
N2_NUMBER = 0x02
DELAY = 0x03
BANDWIDTH = 0x04
PER = 0x05
DUP = 0x06
JITTER = 0x07
MER = 0x08
BURST = 0x09
SESSION = 0x0A
MBURST = 0x10
TYPE = 0x20
GUI_ATTRIBUTES = 0x21
UNIDIRECTIONAL = 0x22
EMULATION_ID = 0x23
NETWORK_ID = 0x24
KEY = 0x25
INTERFACE1_NUMBER = 0x30
INTERFACE1_IP4 = 0x31
INTERFACE1_IP4_MASK = 0x32
INTERFACE1_MAC = 0x33
INTERFACE1_IP6 = 0x34
INTERFACE1_IP6_MASK = 0x35
INTERFACE2_NUMBER = 0x36
INTERFACE2_IP4 = 0x37
INTERFACE2_IP4_MASK = 0x38
INTERFACE2_MAC = 0x39
INTERFACE2_IP6 = 0x40
INTERFACE2_IP6_MASK = 0x41
INTERFACE1_NAME = 0x42
INTERFACE2_NAME = 0x43
OPAQUE = 0x50
class ExecuteTlvs(Enum):
"""
Execute type, length, value enumerations.
"""
NODE = 0x01
NUMBER = 0x02
TIME = 0x03
COMMAND = 0x04
RESULT = 0x05
STATUS = 0x06
SESSION = 0x0A
class ConfigTlvs(Enum):
"""
Configuration type, length, value enumerations.
"""
NODE = 0x01
OBJECT = 0x02
TYPE = 0x03
DATA_TYPES = 0x04
VALUES = 0x05
CAPTIONS = 0x06
BITMAP = 0x07
POSSIBLE_VALUES = 0x08
GROUPS = 0x09
SESSION = 0x0A
INTERFACE_NUMBER = 0x0B
NETWORK_ID = 0x24
OPAQUE = 0x50
class ConfigFlags(Enum):
"""
Configuration flags.
"""
NONE = 0x00
REQUEST = 0x01
UPDATE = 0x02
RESET = 0x03
class FileTlvs(Enum):
"""
File type, length, value enumerations.
"""
NODE = 0x01
NAME = 0x02
MODE = 0x03
NUMBER = 0x04
TYPE = 0x05
SOURCE_NAME = 0x06
SESSION = 0x0A
DATA = 0x10
COMPRESSED_DATA = 0x11
class InterfaceTlvs(Enum):
"""
Interface type, length, value enumerations.
"""
NODE = 0x01
NUMBER = 0x02
NAME = 0x03
IP_ADDRESS = 0x04
MASK = 0x05
MAC_ADDRESS = 0x06
IP6_ADDRESS = 0x07
IP6_MASK = 0x08
TYPE = 0x09
SESSION = 0x0A
STATE = 0x0B
EMULATION_ID = 0x23
NETWORK_ID = 0x24
class EventTlvs(Enum):
"""
Event type, length, value enumerations.
"""
NODE = 0x01
TYPE = 0x02
NAME = 0x03
DATA = 0x04
TIME = 0x05
SESSION = 0x0A
class SessionTlvs(Enum):
"""
Session type, length, value enumerations.
"""
NUMBER = 0x01
NAME = 0x02
FILE = 0x03
NODE_COUNT = 0x04
DATE = 0x05
THUMB = 0x06
USER = 0x07
OPAQUE = 0x0A
class ExceptionTlvs(Enum):
"""
Exception type, length, value enumerations.
"""
NODE = 0x01
SESSION = 0x02
LEVEL = 0x03
SOURCE = 0x04
DATE = 0x05
TEXT = 0x06
OPAQUE = 0x0A
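
The Enum values above are the numeric identifiers packed onto the wire, so code can translate in both directions between names and TLV numbers, for example:

from core.api.tlv.enumerations import ConfigFlags, MessageTypes, NodeTlvs

assert MessageTypes.NODE.value == 0x01          # name -> wire value
assert NodeTlvs.NAME.value == 0x03
assert ConfigFlags(0x02) is ConfigFlags.UPDATE  # wire value -> name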

View file

@ -0,0 +1,43 @@
"""
Utilities for working with python struct data.
"""
import logging
def pack_values(clazz, packers):
"""
Pack values for a given legacy class.
:param class clazz: class that will provide a pack method
:param list packers: a list of tuples that are used to pack values and transform them
:return: packed data string of all values
"""
# iterate through tuples of values to pack
logging.debug("packing: %s", packers)
data = b""
for packer in packers:
# check if a transformer was provided for valid values
transformer = None
if len(packer) == 2:
tlv_type, value = packer
elif len(packer) == 3:
tlv_type, value, transformer = packer
else:
raise RuntimeError("packer had more than 3 arguments")
# only pack actual values and avoid packing empty strings
# protobuf defaults to empty strings and does not imply a value to set
if value is None or (isinstance(value, str) and not value):
continue
# transform values as needed
if transformer:
value = transformer(value)
# pack and add to existing data
logging.debug("packing: %s - %s type(%s)", tlv_type, value, type(value))
data += clazz.pack(tlv_type.value, value)
return data
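
pack_values above accepts (tlv_type, value) or (tlv_type, value, transformer) tuples, skips None and empty-string values, and concatenates each packed TLV. Reusing the imports already shown in the conversion module, a small sketch:

from core.api.tlv import coreapi, structutils
from core.api.tlv.enumerations import NodeTlvs

data = structutils.pack_values(
    coreapi.CoreNodeTlv,
    [
        (NodeTlvs.NUMBER, 1),
        (NodeTlvs.NAME, "n1"),
        (NodeTlvs.X_POSITION, 100),
        (NodeTlvs.LATITUDE, 47.57, str),  # transformer converts the value before packing
        (NodeTlvs.ICON, None),            # skipped: None is never packed
    ],
)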

View file

@ -4,112 +4,70 @@ Common support for configurable CORE objects.
import logging import logging
from collections import OrderedDict from collections import OrderedDict
from dataclasses import dataclass, field from typing import TYPE_CHECKING, Dict, List, Tuple, Type, Union
from typing import TYPE_CHECKING, Any, Optional, Union
from core.emane.nodes import EmaneNet from core.emane.nodes import EmaneNet
from core.emulator.enumerations import ConfigDataTypes from core.emulator.enumerations import ConfigDataTypes
from core.errors import CoreConfigError
from core.nodes.network import WlanNode from core.nodes.network import WlanNode
logger = logging.getLogger(__name__)
if TYPE_CHECKING: if TYPE_CHECKING:
from core.location.mobility import WirelessModel from core.location.mobility import WirelessModel
WirelessModelType = type[WirelessModel] WirelessModelType = Type[WirelessModel]
_BOOL_OPTIONS: set[str] = {"0", "1"}
@dataclass
class ConfigGroup: class ConfigGroup:
""" """
Defines configuration group tabs used for display by ConfigurationOptions. Defines configuration group tabs used for display by ConfigurationOptions.
""" """
name: str def __init__(self, name: str, start: int, stop: int) -> None:
start: int """
stop: int Creates a ConfigGroup object.
:param name: configuration group display name
:param start: configurations start index for this group
:param stop: configurations stop index for this group
"""
self.name = name
self.start = start
self.stop = stop
@dataclass
class Configuration: class Configuration:
""" """
Represents a configuration option. Represents a configuration option.
""" """
id: str def __init__(
type: ConfigDataTypes self,
label: str = None _id: str,
default: str = "" _type: ConfigDataTypes,
options: list[str] = field(default_factory=list) label: str = None,
group: str = "Configuration" default: str = "",
options: List[str] = None,
) -> None:
"""
Creates a Configuration object.
def __post_init__(self) -> None: :param _id: unique name for configuration
self.label = self.label if self.label else self.id :param _type: configuration data type
if self.type == ConfigDataTypes.BOOL: :param label: configuration label for display
if self.default and self.default not in _BOOL_OPTIONS: :param default: default value for configuration
raise CoreConfigError( :param options: list options if this is a configuration with a combobox
f"{self.id} bool value must be one of: {_BOOL_OPTIONS}: " """
f"{self.default}" self.id = _id
) self.type = _type
elif self.type == ConfigDataTypes.FLOAT: self.default = default
if self.default: if not options:
try: options = []
float(self.default) self.options = options
except ValueError: if not label:
raise CoreConfigError( label = _id
f"{self.id} is not a valid float: {self.default}" self.label = label
)
elif self.type != ConfigDataTypes.STRING:
if self.default:
try:
int(self.default)
except ValueError:
raise CoreConfigError(
f"{self.id} is not a valid int: {self.default}"
)
def __str__(self):
@dataclass return f"{self.__class__.__name__}(id={self.id}, type={self.type}, default={self.default}, options={self.options})"
class ConfigBool(Configuration):
"""
Represents a boolean configuration option.
"""
type: ConfigDataTypes = ConfigDataTypes.BOOL
value: bool = False
@dataclass
class ConfigFloat(Configuration):
"""
Represents a float configuration option.
"""
type: ConfigDataTypes = ConfigDataTypes.FLOAT
value: float = 0.0
@dataclass
class ConfigInt(Configuration):
"""
Represents an integer configuration option.
"""
type: ConfigDataTypes = ConfigDataTypes.INT32
value: int = 0
@dataclass
class ConfigString(Configuration):
"""
Represents a string configuration option.
"""
type: ConfigDataTypes = ConfigDataTypes.STRING
value: str = ""
class ConfigurableOptions: class ConfigurableOptions:
@ -117,11 +75,12 @@ class ConfigurableOptions:
Provides a base for defining configuration options within CORE. Provides a base for defining configuration options within CORE.
""" """
name: Optional[str] = None name = None
options: list[Configuration] = [] bitmap = None
options = []
@classmethod @classmethod
def configurations(cls) -> list[Configuration]: def configurations(cls) -> List[Configuration]:
""" """
Provides the configurations for this class. Provides the configurations for this class.
@ -130,7 +89,7 @@ class ConfigurableOptions:
return cls.options return cls.options
@classmethod @classmethod
def config_groups(cls) -> list[ConfigGroup]: def config_groups(cls) -> List[ConfigGroup]:
""" """
Defines how configurations are grouped. Defines how configurations are grouped.
@ -139,7 +98,7 @@ class ConfigurableOptions:
return [ConfigGroup("Options", 1, len(cls.configurations()))] return [ConfigGroup("Options", 1, len(cls.configurations()))]
@classmethod @classmethod
def default_values(cls) -> dict[str, str]: def default_values(cls) -> Dict[str, str]:
""" """
Provides an ordered mapping of configuration keys to default values. Provides an ordered mapping of configuration keys to default values.
@ -156,8 +115,8 @@ class ConfigurableManager:
nodes. nodes.
""" """
_default_node: int = -1 _default_node = -1
_default_type: int = _default_node _default_type = _default_node
def __init__(self) -> None: def __init__(self) -> None:
""" """
@ -165,7 +124,7 @@ class ConfigurableManager:
""" """
self.node_configurations = {} self.node_configurations = {}
def nodes(self) -> list[int]: def nodes(self) -> List[int]:
""" """
Retrieves the ids of all node configurations known by this manager. Retrieves the ids of all node configurations known by this manager.
@ -177,8 +136,7 @@ class ConfigurableManager:
""" """
Clears all configurations or configuration for a specific node. Clears all configurations or configuration for a specific node.
:param node_id: node id to clear configurations for, default is None and clears :param node_id: node id to clear configurations for, default is None and clears all configurations
all configurations
:return: nothing :return: nothing
""" """
if not node_id: if not node_id:
@ -208,7 +166,7 @@ class ConfigurableManager:
def set_configs( def set_configs(
self, self,
config: dict[str, str], config: Dict[str, str],
node_id: int = _default_node, node_id: int = _default_node,
config_type: str = _default_type, config_type: str = _default_type,
) -> None: ) -> None:
@ -220,7 +178,7 @@ class ConfigurableManager:
:param config_type: configuration type to store configuration for :param config_type: configuration type to store configuration for
:return: nothing :return: nothing
""" """
logger.debug( logging.debug(
"setting config for node(%s) type(%s): %s", node_id, config_type, config "setting config for node(%s) type(%s): %s", node_id, config_type, config
) )
node_configs = self.node_configurations.setdefault(node_id, OrderedDict()) node_configs = self.node_configurations.setdefault(node_id, OrderedDict())
@ -250,7 +208,7 @@ class ConfigurableManager:
def get_configs( def get_configs(
self, node_id: int = _default_node, config_type: str = _default_type self, node_id: int = _default_node, config_type: str = _default_type
) -> Optional[dict[str, str]]: ) -> Dict[str, str]:
""" """
Retrieve configurations for a node and configuration type. Retrieve configurations for a node and configuration type.
@ -264,7 +222,7 @@ class ConfigurableManager:
result = node_configs.get(config_type) result = node_configs.get(config_type)
return result return result
def get_all_configs(self, node_id: int = _default_node) -> dict[str, Any]: def get_all_configs(self, node_id: int = _default_node) -> List[Dict[str, str]]:
""" """
Retrieve all current configuration types for a node. Retrieve all current configuration types for a node.
@ -284,11 +242,11 @@ class ModelManager(ConfigurableManager):
Creates a ModelManager object. Creates a ModelManager object.
""" """
super().__init__() super().__init__()
self.models: dict[str, Any] = {} self.models = {}
self.node_models: dict[int, str] = {} self.node_models = {}
def set_model_config( def set_model_config(
self, node_id: int, model_name: str, config: dict[str, str] = None self, node_id: int, model_name: str, config: Dict[str, str] = None
) -> None: ) -> None:
""" """
Set configuration data for a model. Set configuration data for a model.
@ -317,7 +275,7 @@ class ModelManager(ConfigurableManager):
# set configuration # set configuration
self.set_configs(model_config, node_id=node_id, config_type=model_name) self.set_configs(model_config, node_id=node_id, config_type=model_name)
def get_model_config(self, node_id: int, model_name: str) -> dict[str, str]: def get_model_config(self, node_id: int, model_name: str) -> Dict[str, str]:
""" """
Retrieve configuration data for a model. Retrieve configuration data for a model.
@ -342,7 +300,7 @@ class ModelManager(ConfigurableManager):
self, self,
node: Union[WlanNode, EmaneNet], node: Union[WlanNode, EmaneNet],
model_class: "WirelessModelType", model_class: "WirelessModelType",
config: dict[str, str] = None, config: Dict[str, str] = None,
) -> None: ) -> None:
""" """
Set model and model configuration for node. Set model and model configuration for node.
@ -352,7 +310,7 @@ class ModelManager(ConfigurableManager):
:param config: model configuration, None for default configuration :param config: model configuration, None for default configuration
:return: nothing :return: nothing
""" """
logger.debug( logging.debug(
"setting model(%s) for node(%s): %s", model_class.name, node.id, config "setting model(%s) for node(%s): %s", model_class.name, node.id, config
) )
self.set_model_config(node.id, model_class.name, config) self.set_model_config(node.id, model_class.name, config)
@ -361,7 +319,7 @@ class ModelManager(ConfigurableManager):
def get_models( def get_models(
self, node: Union[WlanNode, EmaneNet] self, node: Union[WlanNode, EmaneNet]
) -> list[tuple[type, dict[str, str]]]: ) -> List[Tuple[Type, Dict[str, str]]]:
""" """
Return a list of model classes and values for a net if one has been Return a list of model classes and values for a net if one has been
configured. This is invoked when exporting a session to XML. configured. This is invoked when exporting a session to XML.
@ -381,5 +339,5 @@ class ModelManager(ConfigurableManager):
model_class = self.models[model_name] model_class = self.models[model_name]
models.append((model_class, config)) models.append((model_class, config))
logger.debug("models for node(%s): %s", node.id, models) logging.debug("models for node(%s): %s", node.id, models)
return models return models
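As a brief illustration of the storage pattern above, configurations live in one dictionary per node id, keyed again by configuration type; a minimal sketch (the node id and type name below are made up, and real callers normally go through subclasses such as ModelManager):

manager = ConfigurableManager()
manager.set_configs({"range": "275"}, node_id=1, config_type="example")
print(manager.get_configs(node_id=1, config_type="example"))  # {'range': '275'}
print(manager.get_all_configs(node_id=1))                     # every type stored for node 1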

View file

@ -2,10 +2,9 @@ import abc
import enum import enum
import inspect import inspect
import logging import logging
import pathlib
import time import time
from dataclasses import dataclass from typing import Any, Dict, List
from pathlib import Path
from typing import Any, Optional
from mako import exceptions from mako import exceptions
from mako.lookup import TemplateLookup from mako.lookup import TemplateLookup
@ -15,22 +14,7 @@ from core.config import Configuration
from core.errors import CoreCommandError, CoreError from core.errors import CoreCommandError, CoreError
from core.nodes.base import CoreNode from core.nodes.base import CoreNode
logger = logging.getLogger(__name__) TEMPLATES_DIR = "templates"
TEMPLATES_DIR: str = "templates"
def get_template_path(file_path: Path) -> str:
"""
Utility to convert a given file path to a valid template path format.
:param file_path: file path to convert
:return: template path
"""
if file_path.is_absolute():
template_path = str(file_path.relative_to("/"))
else:
template_path = str(file_path)
return template_path
class ConfigServiceMode(enum.Enum): class ConfigServiceMode(enum.Enum):
@ -43,31 +27,16 @@ class ConfigServiceBootError(Exception):
pass pass
class ConfigServiceTemplateError(Exception):
pass
@dataclass
class ShadowDir:
path: str
src: Optional[str] = None
templates: bool = False
has_node_paths: bool = False
class ConfigService(abc.ABC): class ConfigService(abc.ABC):
""" """
Base class for creating configurable services. Base class for creating configurable services.
""" """
# validation period in seconds, how frequent validation is attempted # validation period in seconds, how frequent validation is attempted
validation_period: float = 0.5 validation_period = 0.5
# time to wait in seconds for determining if service started successfully # time to wait in seconds for determining if service started successfully
validation_timer: int = 5 validation_timer = 5
# directories to shadow and copy files from
shadow_directories: list[ShadowDir] = []
def __init__(self, node: CoreNode) -> None: def __init__(self, node: CoreNode) -> None:
""" """
@ -75,13 +44,13 @@ class ConfigService(abc.ABC):
:param node: node this service is assigned to :param node: node this service is assigned to
""" """
self.node: CoreNode = node self.node = node
class_file = inspect.getfile(self.__class__) class_file = inspect.getfile(self.__class__)
templates_path = Path(class_file).parent.joinpath(TEMPLATES_DIR) templates_path = pathlib.Path(class_file).parent.joinpath(TEMPLATES_DIR)
self.templates: TemplateLookup = TemplateLookup(directories=templates_path) self.templates = TemplateLookup(directories=templates_path)
self.config: dict[str, Configuration] = {} self.config = {}
self.custom_templates: dict[str, str] = {} self.custom_templates = {}
self.custom_config: dict[str, str] = {} self.custom_config = {}
configs = self.default_configs[:] configs = self.default_configs[:]
self._define_config(configs) self._define_config(configs)
@ -108,47 +77,47 @@ class ConfigService(abc.ABC):
@property @property
@abc.abstractmethod @abc.abstractmethod
def directories(self) -> list[str]: def directories(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def files(self) -> list[str]: def files(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def default_configs(self) -> list[Configuration]: def default_configs(self) -> List[Configuration]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def modes(self) -> dict[str, dict[str, str]]: def modes(self) -> Dict[str, Dict[str, str]]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def executables(self) -> list[str]: def executables(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def dependencies(self) -> list[str]: def dependencies(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def startup(self) -> list[str]: def startup(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def validate(self) -> list[str]: def validate(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@abc.abstractmethod @abc.abstractmethod
def shutdown(self) -> list[str]: def shutdown(self) -> List[str]:
raise NotImplementedError raise NotImplementedError
@property @property
@ -164,8 +133,7 @@ class ConfigService(abc.ABC):
:return: nothing :return: nothing
:raises ConfigServiceBootError: when there is an error starting service :raises ConfigServiceBootError: when there is an error starting service
""" """
logger.info("node(%s) service(%s) starting...", self.node.name, self.name) logging.info("node(%s) service(%s) starting...", self.node.name, self.name)
self.create_shadow_dirs()
self.create_dirs() self.create_dirs()
self.create_files() self.create_files()
wait = self.validation_mode == ConfigServiceMode.BLOCKING wait = self.validation_mode == ConfigServiceMode.BLOCKING
@ -186,7 +154,7 @@ class ConfigService(abc.ABC):
try: try:
self.node.cmd(cmd) self.node.cmd(cmd)
except CoreCommandError: except CoreCommandError:
logger.exception( logging.exception(
f"node({self.node.name}) service({self.name}) " f"node({self.node.name}) service({self.name}) "
f"failed shutdown: {cmd}" f"failed shutdown: {cmd}"
) )
@ -200,64 +168,6 @@ class ConfigService(abc.ABC):
self.stop() self.stop()
self.start() self.start()
def create_shadow_dirs(self) -> None:
"""
Creates a shadow of a host system directory recursively
to be mapped and live within a node.
:return: nothing
:raises CoreError: when there is a failure creating a directory or file
"""
for shadow_dir in self.shadow_directories:
# setup shadow and src paths, using node unique paths when configured
shadow_path = Path(shadow_dir.path)
if shadow_dir.src is None:
src_path = shadow_path
else:
src_path = Path(shadow_dir.src)
if shadow_dir.has_node_paths:
src_path = src_path / self.node.name
# validate shadow and src paths
if not shadow_path.is_absolute():
raise CoreError(f"shadow dir({shadow_path}) is not absolute")
if not src_path.is_absolute():
raise CoreError(f"shadow source dir({src_path}) is not absolute")
if not src_path.is_dir():
raise CoreError(f"shadow source dir({src_path}) does not exist")
# create root of the shadow path within node
logger.info(
"node(%s) creating shadow directory(%s) src(%s) node paths(%s) "
"templates(%s)",
self.node.name,
shadow_path,
src_path,
shadow_dir.has_node_paths,
shadow_dir.templates,
)
self.node.create_dir(shadow_path)
# find all directories and files to create
dir_paths = []
file_paths = []
for path in src_path.rglob("*"):
shadow_src_path = shadow_path / path.relative_to(src_path)
if path.is_dir():
dir_paths.append(shadow_src_path)
else:
file_paths.append((path, shadow_src_path))
# create all directories within node
for path in dir_paths:
self.node.create_dir(path)
# create all files within node, from templates when configured
data = self.data()
templates = TemplateLookup(directories=src_path)
for path, dst_path in file_paths:
if shadow_dir.templates:
template = templates.get_template(path.name)
rendered = self._render(template, data)
self.node.create_file(dst_path, rendered)
else:
self.node.copy_file(path, dst_path)
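For reference, a service opts into this behavior by listing ShadowDir entries as a class attribute; a hedged sketch (the class and host paths below are invented, and the remaining required ConfigService members are omitted):

class ShadowExample(ConfigService):
    shadow_directories: list[ShadowDir] = [
        # copy /opt/example/config from the host into /etc/example inside each
        # node, rendering any files found there as Mako templates
        ShadowDir(path="/etc/example", src="/opt/example/config", templates=True)
    ]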
def create_dirs(self) -> None: def create_dirs(self) -> None:
""" """
Creates directories for service. Creates directories for service.
@ -265,18 +175,16 @@ class ConfigService(abc.ABC):
:return: nothing :return: nothing
:raises CoreError: when there is a failure creating a directory :raises CoreError: when there is a failure creating a directory
""" """
logger.debug("creating config service directories") for directory in self.directories:
for directory in sorted(self.directories):
dir_path = Path(directory)
try: try:
self.node.create_dir(dir_path) self.node.privatedir(directory)
except (CoreCommandError, CoreError): except (CoreCommandError, ValueError):
raise CoreError( raise CoreError(
f"node({self.node.name}) service({self.name}) " f"node({self.node.name}) service({self.name}) "
f"failure to create service directory: {directory}" f"failure to create service directory: {directory}"
) )
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
""" """
Returns key/value data, used when rendering file templates. Returns key/value data, used when rendering file templates.
@ -303,7 +211,7 @@ class ConfigService(abc.ABC):
""" """
raise CoreError(f"service({self.name}) unknown template({name})") raise CoreError(f"service({self.name}) unknown template({name})")
def get_templates(self) -> dict[str, str]: def get_templates(self) -> Dict[str, str]:
""" """
Retrieves mapping of file names to templates for all cases, which Retrieves mapping of file names to templates for all cases, which
includes custom templates, file templates, and text templates. includes custom templates, file templates, and text templates.
@ -311,53 +219,19 @@ class ConfigService(abc.ABC):
:return: mapping of files to templates :return: mapping of files to templates
""" """
templates = {} templates = {}
for file in self.files: for name in self.files:
file_path = Path(file) basename = pathlib.Path(name).name
template_path = get_template_path(file_path) if name in self.custom_templates:
if file in self.custom_templates: template = self.custom_templates[name]
template = self.custom_templates[file]
template = self.clean_text(template) template = self.clean_text(template)
elif self.templates.has_template(template_path): elif self.templates.has_template(basename):
template = self.templates.get_template(template_path).source template = self.templates.get_template(basename).source
else: else:
try: template = self.get_text_template(name)
template = self.get_text_template(file)
except Exception as e:
raise ConfigServiceTemplateError(
f"node({self.node.name}) service({self.name}) file({file}) "
f"failure getting template: {e}"
)
template = self.clean_text(template) template = self.clean_text(template)
templates[file] = template templates[name] = template
return templates return templates
def get_rendered_templates(self) -> dict[str, str]:
templates = {}
data = self.data()
for file in sorted(self.files):
rendered = self._get_rendered_template(file, data)
templates[file] = rendered
return templates
def _get_rendered_template(self, file: str, data: dict[str, Any]) -> str:
file_path = Path(file)
template_path = get_template_path(file_path)
if file in self.custom_templates:
text = self.custom_templates[file]
rendered = self.render_text(text, data)
elif self.templates.has_template(template_path):
rendered = self.render_template(template_path, data)
else:
try:
text = self.get_text_template(file)
except Exception as e:
raise ConfigServiceTemplateError(
f"node({self.node.name}) service({self.name}) file({file}) "
f"failure getting template: {e}"
)
rendered = self.render_text(text, data)
return rendered
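Both methods above resolve each file through the same precedence; sketched on a hypothetical service instance whose files list contains "/etc/example/app.conf" (all names here are invented):

service.custom_templates["/etc/example/app.conf"] = "managed by CORE for ${node.name}"
# per-file lookup order:
#   1. custom_templates[file]           - wins when an override is set, as here
#   2. a Mako file under templates/     - packaged alongside the service class
#   3. get_text_template(file)          - fallback text defined by the service
rendered = service.get_rendered_templates()["/etc/example/app.conf"]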
def create_files(self) -> None: def create_files(self) -> None:
""" """
Creates service files inside associated node. Creates service files inside associated node.
@ -365,13 +239,24 @@ class ConfigService(abc.ABC):
:return: nothing :return: nothing
""" """
data = self.data() data = self.data()
for file in sorted(self.files): for name in self.files:
logger.debug( basename = pathlib.Path(name).name
"node(%s) service(%s) template(%s)", self.node.name, self.name, file if name in self.custom_templates:
text = self.custom_templates[name]
rendered = self.render_text(text, data)
elif self.templates.has_template(basename):
rendered = self.render_template(basename, data)
else:
text = self.get_text_template(name)
rendered = self.render_text(text, data)
logging.debug(
"node(%s) service(%s) template(%s): \n%s",
self.node.name,
self.name,
name,
rendered,
) )
rendered = self._get_rendered_template(file, data) self.node.nodefile(name, rendered)
file_path = Path(file)
self.node.create_file(file_path, rendered)
def run_startup(self, wait: bool) -> None: def run_startup(self, wait: bool) -> None:
""" """
@ -415,7 +300,7 @@ class ConfigService(abc.ABC):
del cmds[index] del cmds[index]
index += 1 index += 1
except CoreCommandError: except CoreCommandError:
logger.debug( logging.debug(
f"node({self.node.name}) service({self.name}) " f"node({self.node.name}) service({self.name}) "
f"validate command failed: {cmd}" f"validate command failed: {cmd}"
) )
@ -426,7 +311,7 @@ class ConfigService(abc.ABC):
f"node({self.node.name}) service({self.name}) failed to validate" f"node({self.node.name}) service({self.name}) failed to validate"
) )
def _render(self, template: Template, data: dict[str, Any] = None) -> str: def _render(self, template: Template, data: Dict[str, Any] = None) -> str:
""" """
Renders template providing all associated data to template. Renders template providing all associated data to template.
@ -440,7 +325,7 @@ class ConfigService(abc.ABC):
node=self.node, config=self.render_config(), **data node=self.node, config=self.render_config(), **data
) )
def render_text(self, text: str, data: dict[str, Any] = None) -> str: def render_text(self, text: str, data: Dict[str, Any] = None) -> str:
""" """
Renders text based template providing all associated data to template. Renders text based template providing all associated data to template.
@ -458,24 +343,24 @@ class ConfigService(abc.ABC):
f"{exceptions.text_error_template().render_unicode()}" f"{exceptions.text_error_template().render_unicode()}"
) )
def render_template(self, template_path: str, data: dict[str, Any] = None) -> str: def render_template(self, basename: str, data: Dict[str, Any] = None) -> str:
""" """
Renders file based template providing all associated data to template. Renders file based template providing all associated data to template.
:param template_path: path of file to render :param basename: base name for file to render
:param data: service specific defined data for template :param data: service specific defined data for template
:return: rendered template :return: rendered template
""" """
try: try:
template = self.templates.get_template(template_path) template = self.templates.get_template(basename)
return self._render(template, data) return self._render(template, data)
except Exception: except Exception:
raise CoreError( raise CoreError(
f"node({self.node.name}) service({self.name}) file({template_path})" f"node({self.node.name}) service({self.name}) "
f"{exceptions.text_error_template().render_unicode()}" f"{exceptions.text_error_template().render_template()}"
) )
def _define_config(self, configs: list[Configuration]) -> None: def _define_config(self, configs: List[Configuration]) -> None:
""" """
Initializes default configuration data. Initializes default configuration data.
@ -485,7 +370,7 @@ class ConfigService(abc.ABC):
for config in configs: for config in configs:
self.config[config.id] = config self.config[config.id] = config
def render_config(self) -> dict[str, str]: def render_config(self) -> Dict[str, str]:
""" """
Returns configuration data key/value pairs for rendering a template. Returns configuration data key/value pairs for rendering a template.
@ -496,7 +381,7 @@ class ConfigService(abc.ABC):
else: else:
return {k: v.default for k, v in self.config.items()} return {k: v.default for k, v in self.config.items()}
def set_config(self, data: dict[str, str]) -> None: def set_config(self, data: Dict[str, str]) -> None:
""" """
Set configuration data from key/value pairs. Set configuration data from key/value pairs.

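Putting this base class together, a new service mostly fills in the declared attributes and supplies templates; a minimal, hypothetical example (not part of CORE, written against the left-hand column's API) might look like:

from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode

class HelloService(ConfigService):
    name: str = "HelloExample"
    group: str = "Examples"
    directories: list[str] = []
    files: list[str] = ["hello.sh"]
    executables: list[str] = []
    dependencies: list[str] = []
    startup: list[str] = ["bash hello.sh"]
    validate: list[str] = []
    shutdown: list[str] = []
    validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
    default_configs: list[Configuration] = []
    modes: dict[str, dict[str, str]] = {}

    def get_text_template(self, name: str) -> str:
        # fallback used because no templates/ file is packaged for hello.sh
        return """
        #!/bin/sh
        echo hello from ${node.name}
        """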
View file

@ -1,7 +1,5 @@
import logging import logging
from typing import TYPE_CHECKING from typing import TYPE_CHECKING, Dict, List
logger = logging.getLogger(__name__)
if TYPE_CHECKING: if TYPE_CHECKING:
from core.configservice.base import ConfigService from core.configservice.base import ConfigService
@ -12,16 +10,16 @@ class ConfigServiceDependencies:
Generates sets of services to start in order of their dependencies. Generates sets of services to start in order of their dependencies.
""" """
def __init__(self, services: dict[str, "ConfigService"]) -> None: def __init__(self, services: Dict[str, "ConfigService"]) -> None:
""" """
Create a ConfigServiceDependencies instance. Create a ConfigServiceDependencies instance.
:param services: services for determining dependency sets :param services: services for determining dependency sets
""" """
# helpers to check validity # helpers to check validity
self.dependents: dict[str, set[str]] = {} self.dependents = {}
self.started: set[str] = set() self.started = set()
self.node_services: dict[str, "ConfigService"] = {} self.node_services = {}
for service in services.values(): for service in services.values():
self.node_services[service.name] = service self.node_services[service.name] = service
for dependency in service.dependencies: for dependency in service.dependencies:
@ -29,11 +27,11 @@ class ConfigServiceDependencies:
dependents.add(service.name) dependents.add(service.name)
# used to find paths # used to find paths
self.path: list["ConfigService"] = [] self.path = []
self.visited: set[str] = set() self.visited = set()
self.visiting: set[str] = set() self.visiting = set()
def startup_paths(self) -> list[list["ConfigService"]]: def startup_paths(self) -> List[List["ConfigService"]]:
""" """
Find startup path sets based on service dependencies. Find startup path sets based on service dependencies.
@ -43,7 +41,7 @@ class ConfigServiceDependencies:
for name in self.node_services: for name in self.node_services:
service = self.node_services[name] service = self.node_services[name]
if service.name in self.started: if service.name in self.started:
logger.debug( logging.debug(
"skipping service that will already be started: %s", service.name "skipping service that will already be started: %s", service.name
) )
continue continue
@ -54,8 +52,8 @@ class ConfigServiceDependencies:
if self.started != set(self.node_services): if self.started != set(self.node_services):
raise ValueError( raise ValueError(
f"failure to start all services: {self.started} != " "failure to start all services: %s != %s"
f"{self.node_services.keys()}" % (self.started, self.node_services.keys())
) )
return paths return paths
@ -70,25 +68,25 @@ class ConfigServiceDependencies:
self.visited.clear() self.visited.clear()
self.visiting.clear() self.visiting.clear()
def _start(self, service: "ConfigService") -> list["ConfigService"]: def _start(self, service: "ConfigService") -> List["ConfigService"]:
""" """
Starts a path for checking dependencies for a given service. Starts a path for checking dependencies for a given service.
:param service: service to check dependencies for :param service: service to check dependencies for
:return: list of config services to start in order :return: list of config services to start in order
""" """
logger.debug("starting service dependency check: %s", service.name) logging.debug("starting service dependency check: %s", service.name)
self._reset() self._reset()
return self._visit(service) return self._visit(service)
def _visit(self, current_service: "ConfigService") -> list["ConfigService"]: def _visit(self, current_service: "ConfigService") -> List["ConfigService"]:
""" """
Visits a service when discovering dependency chains for service. Visits a service when discovering dependency chains for service.
:param current_service: service being visited :param current_service: service being visited
:return: list of dependent services for a visited service :return: list of dependent services for a visited service
""" """
logger.debug("visiting service(%s): %s", current_service.name, self.path) logging.debug("visiting service(%s): %s", current_service.name, self.path)
self.visited.add(current_service.name) self.visited.add(current_service.name)
self.visiting.add(current_service.name) self.visiting.add(current_service.name)
@ -96,14 +94,14 @@ class ConfigServiceDependencies:
for service_name in current_service.dependencies: for service_name in current_service.dependencies:
if service_name not in self.node_services: if service_name not in self.node_services:
raise ValueError( raise ValueError(
"required dependency was not included in node " "required dependency was not included in node services: %s"
f"services: {service_name}" % service_name
) )
if service_name in self.visiting: if service_name in self.visiting:
raise ValueError( raise ValueError(
f"cyclic dependency at service({current_service.name}): " "cyclic dependency at service(%s): %s"
f"{service_name}" % (current_service.name, service_name)
) )
if service_name not in self.visited: if service_name not in self.visited:
@ -111,7 +109,7 @@ class ConfigServiceDependencies:
self._visit(service) self._visit(service)
# add service when bottom is found # add service when bottom is found
logger.debug("adding service to startup path: %s", current_service.name) logging.debug("adding service to startup path: %s", current_service.name)
self.started.add(current_service.name) self.started.add(current_service.name)
self.path.append(current_service) self.path.append(current_service)
self.visiting.remove(current_service.name) self.visiting.remove(current_service.name)
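In practice the walk above is driven with a node's assigned services keyed by name, which is the shape config_services already has; a short sketch (assuming node is a CORE node with services attached):

services = {service.name: service for service in node.config_services.values()}
for startup_path in ConfigServiceDependencies(services).startup_paths():
    for service in startup_path:
        print(service.name)  # printed in a dependency-safe start order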

View file

@ -1,14 +1,11 @@
import logging import logging
import pathlib import pathlib
import pkgutil from typing import List, Type
from pathlib import Path
from core import configservices, utils from core import utils
from core.configservice.base import ConfigService from core.configservice.base import ConfigService
from core.errors import CoreError from core.errors import CoreError
logger = logging.getLogger(__name__)
class ConfigServiceManager: class ConfigServiceManager:
""" """
@ -19,9 +16,9 @@ class ConfigServiceManager:
""" """
Create a ConfigServiceManager instance. Create a ConfigServiceManager instance.
""" """
self.services: dict[str, type[ConfigService]] = {} self.services = {}
def get_service(self, name: str) -> type[ConfigService]: def get_service(self, name: str) -> Type[ConfigService]:
""" """
Retrieve a service by name. Retrieve a service by name.
@ -31,10 +28,10 @@ class ConfigServiceManager:
""" """
service_class = self.services.get(name) service_class = self.services.get(name)
if service_class is None: if service_class is None:
raise CoreError(f"service does not exist {name}") raise CoreError(f"service does not exit {name}")
return service_class return service_class
def add(self, service: type[ConfigService]) -> None: def add(self, service: ConfigService) -> None:
""" """
Add service to manager, checking service requirements have been met. Add service to manager, checking service requirements have been met.
@ -43,9 +40,7 @@ class ConfigServiceManager:
:raises CoreError: when service is a duplicate or has unmet executables :raises CoreError: when service is a duplicate or has unmet executables
""" """
name = service.name name = service.name
logger.debug( logging.debug("loading service: class(%s) name(%s)", service.__class__, name)
"loading service: class(%s) name(%s)", service.__class__.__name__, name
)
# avoid duplicate services # avoid duplicate services
if name in self.services: if name in self.services:
@ -55,49 +50,33 @@ class ConfigServiceManager:
for executable in service.executables: for executable in service.executables:
try: try:
utils.which(executable, required=True) utils.which(executable, required=True)
except CoreError as e: except ValueError:
raise CoreError(f"config service({service.name}): {e}") raise CoreError(
f"service({service.name}) missing executable {executable}"
)
# make service available # make service available
self.services[name] = service self.services[name] = service
def load_locals(self) -> list[str]: def load(self, path: str) -> List[str]:
""" """
Search for and add config services from the local core module. Search the provided path for configurable services and add them to be managed.
:return: list of errors when loading services
"""
errors = []
for module_info in pkgutil.walk_packages(
configservices.__path__, f"{configservices.__name__}."
):
services = utils.load_module(module_info.name, ConfigService)
for service in services:
try:
self.add(service)
except CoreError as e:
errors.append(service.name)
logger.debug("not loading config service(%s): %s", service.name, e)
return errors
def load(self, path: Path) -> list[str]:
"""
Search path provided for config services and add them for being managed.
:param path: path to search configurable services :param path: path to search configurable services
:return: list errors when loading services :return: list errors when loading and adding services
""" """
path = pathlib.Path(path) path = pathlib.Path(path)
subdirs = [x for x in path.iterdir() if x.is_dir()] subdirs = [x for x in path.iterdir() if x.is_dir()]
subdirs.append(path) subdirs.append(path)
service_errors = [] service_errors = []
for subdir in subdirs: for subdir in subdirs:
logger.debug("loading config services from: %s", subdir) logging.debug("loading config services from: %s", subdir)
services = utils.load_classes(subdir, ConfigService) services = utils.load_classes(str(subdir), ConfigService)
for service in services: for service in services:
logging.debug("found service: %s", service)
try: try:
self.add(service) self.add(service)
except CoreError as e: except CoreError as e:
service_errors.append(service.name) service_errors.append(service.name)
logger.debug("not loading service(%s): %s", service.name, e) logging.debug("not loading service(%s): %s", service.name, e)
return service_errors return service_errors

View file

@ -1,56 +1,45 @@
import abc import abc
from typing import Any from typing import Any, Dict
from core.config import Configuration import netaddr
from core import constants
from core.configservice.base import ConfigService, ConfigServiceMode from core.configservice.base import ConfigService, ConfigServiceMode
from core.emane.nodes import EmaneNet from core.emane.nodes import EmaneNet
from core.nodes.base import CoreNodeBase, NodeBase from core.nodes.base import CoreNodeBase
from core.nodes.interface import DEFAULT_MTU, CoreInterface from core.nodes.interface import CoreInterface
from core.nodes.network import PtpNet, WlanNode from core.nodes.network import WlanNode
from core.nodes.physical import Rj45Node
from core.nodes.wireless import WirelessNode
GROUP: str = "FRR" GROUP = "FRR"
FRR_STATE_DIR: str = "/var/run/frr"
def is_wireless(node: NodeBase) -> bool: def has_mtu_mismatch(ifc: CoreInterface) -> bool:
"""
Check if the node is a wireless type node.
:param node: node to check type for
:return: True if wireless type, False otherwise
"""
return isinstance(node, (WlanNode, EmaneNet, WirelessNode))
def has_mtu_mismatch(iface: CoreInterface) -> bool:
""" """
Helper to detect MTU mismatch and add the appropriate FRR Helper to detect MTU mismatch and add the appropriate FRR
mtu-ignore command. This is needed when e.g. a node is linked via a mtu-ignore command. This is needed when e.g. a node is linked via a
GreTap device. GreTap device.
""" """
if iface.mtu != DEFAULT_MTU: if ifc.mtu != 1500:
return True return True
if not iface.net: if not ifc.net:
return False return False
for net_iface in iface.net.get_ifaces(): for i in ifc.net.netifs():
if net_iface.mtu != iface.mtu: if i.mtu != ifc.mtu:
return True return True
return False return False
def get_min_mtu(iface: CoreInterface) -> int: def get_min_mtu(ifc):
""" """
Helper to discover the minimum MTU of interfaces linked with the Helper to discover the minimum MTU of interfaces linked with the
given interface. given interface.
""" """
mtu = iface.mtu mtu = ifc.mtu
if not iface.net: if not ifc.net:
return mtu return mtu
for iface in iface.net.get_ifaces(): for i in ifc.net.netifs():
if iface.mtu < mtu: if i.mtu < mtu:
mtu = iface.mtu mtu = i.mtu
return mtu return mtu
@ -58,54 +47,42 @@ def get_router_id(node: CoreNodeBase) -> str:
""" """
Helper to return the first IPv4 address of a node as its router ID. Helper to return the first IPv4 address of a node as its router ID.
""" """
for iface in node.get_ifaces(control=False): for ifc in node.netifs():
ip4 = iface.get_ip4() if getattr(ifc, "control", False):
if ip4: continue
return str(ip4.ip) for a in ifc.addrlist:
a = a.split("/")[0]
if netaddr.valid_ipv4(a):
return a
return "0.0.0.0" return "0.0.0.0"
def rj45_check(iface: CoreInterface) -> bool:
"""
Helper to detect whether an interface is connected to an external RJ45
link.
"""
if iface.net:
for peer_iface in iface.net.get_ifaces():
if peer_iface == iface:
continue
if isinstance(peer_iface.node, Rj45Node):
return True
return False
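A short sketch of how the helpers above feed the generated FRR configuration (assuming node is a router node in a running session, using the left-hand column's interface API):

router_id = get_router_id(node)        # falls back to "0.0.0.0" when no IPv4 address exists
for iface in node.get_ifaces(control=False):
    if rj45_check(iface):
        continue                       # interface bridges to an external RJ45 link
    if has_mtu_mismatch(iface):
        print(f"interface {iface.name}: ip ospf mtu-ignore")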
class FRRZebra(ConfigService): class FRRZebra(ConfigService):
name: str = "FRRzebra" name = "FRRzebra"
group: str = GROUP group = GROUP
directories: list[str] = ["/usr/local/etc/frr", "/var/run/frr", "/var/log/frr"] directories = ["/usr/local/etc/frr", "/var/run/frr", "/var/log/frr"]
files: list[str] = [ files = [
"/usr/local/etc/frr/frr.conf", "/usr/local/etc/frr/frr.conf",
"frrboot.sh", "frrboot.sh",
"/usr/local/etc/frr/vtysh.conf", "/usr/local/etc/frr/vtysh.conf",
"/usr/local/etc/frr/daemons", "/usr/local/etc/frr/daemons",
] ]
executables: list[str] = ["zebra"] executables = ["zebra"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash frrboot.sh zebra"] startup = ["sh frrboot.sh zebra"]
validate: list[str] = ["pidof zebra"] validate = ["pidof zebra"]
shutdown: list[str] = ["killall zebra"] shutdown = ["killall zebra"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
frr_conf = self.files[0] frr_conf = self.files[0]
frr_bin_search = self.node.session.options.get( frr_bin_search = self.node.session.options.get_config(
"frr_bin_search", default="/usr/local/bin /usr/bin /usr/lib/frr" "frr_bin_search", default="/usr/local/bin /usr/bin /usr/lib/frr"
).strip('"') ).strip('"')
frr_sbin_search = self.node.session.options.get( frr_sbin_search = self.node.session.options.get_config(
"frr_sbin_search", "frr_sbin_search", default="/usr/local/sbin /usr/sbin /usr/lib/frr"
default="/usr/local/sbin /usr/sbin /usr/lib/frr /usr/libexec/frr",
).strip('"') ).strip('"')
services = [] services = []
@ -114,30 +91,31 @@ class FRRZebra(ConfigService):
for service in self.node.config_services.values(): for service in self.node.config_services.values():
if self.name not in service.dependencies: if self.name not in service.dependencies:
continue continue
if not isinstance(service, FrrService):
continue
if service.ipv4_routing: if service.ipv4_routing:
want_ip4 = True want_ip4 = True
if service.ipv6_routing: if service.ipv6_routing:
want_ip6 = True want_ip6 = True
services.append(service) services.append(service)
ifaces = [] interfaces = []
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
ip4s = [] ip4s = []
ip6s = [] ip6s = []
for ip4 in iface.ip4s: for x in ifc.addrlist:
ip4s.append(str(ip4.ip)) addr = x.split("/")[0]
for ip6 in iface.ip6s: if netaddr.valid_ipv4(addr):
ip6s.append(str(ip6.ip)) ip4s.append(x)
ifaces.append((iface, ip4s, ip6s, iface.control)) else:
ip6s.append(x)
is_control = getattr(ifc, "control", False)
interfaces.append((ifc, ip4s, ip6s, is_control))
return dict( return dict(
frr_conf=frr_conf, frr_conf=frr_conf,
frr_sbin_search=frr_sbin_search, frr_sbin_search=frr_sbin_search,
frr_bin_search=frr_bin_search, frr_bin_search=frr_bin_search,
frr_state_dir=FRR_STATE_DIR, frr_state_dir=constants.FRR_STATE_DIR,
ifaces=ifaces, interfaces=interfaces,
want_ip4=want_ip4, want_ip4=want_ip4,
want_ip6=want_ip6, want_ip6=want_ip6,
services=services, services=services,
@ -145,22 +123,22 @@ class FRRZebra(ConfigService):
class FrrService(abc.ABC): class FrrService(abc.ABC):
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = [] files = []
executables: list[str] = [] executables = []
dependencies: list[str] = ["FRRzebra"] dependencies = ["FRRzebra"]
startup: list[str] = [] startup = []
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
ipv4_routing: bool = False ipv4_routing = False
ipv6_routing: bool = False ipv6_routing = False
@abc.abstractmethod @abc.abstractmethod
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
raise NotImplementedError raise NotImplementedError
@abc.abstractmethod @abc.abstractmethod
@ -175,17 +153,22 @@ class FRROspfv2(FrrService, ConfigService):
unified frr.conf file. unified frr.conf file.
""" """
name: str = "FRROSPFv2" name = "FRROSPFv2"
shutdown: list[str] = ["killall ospfd"] startup = ()
validate: list[str] = ["pidof ospfd"] shutdown = ["killall ospfd"]
ipv4_routing: bool = True validate = ["pidof ospfd"]
ipv4_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
addresses = [] addresses = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
for ip4 in iface.ip4s: if getattr(ifc, "control", False):
addresses.append(str(ip4)) continue
for a in ifc.addrlist:
addr = a.split("/")[0]
if netaddr.valid_ipv4(addr):
addresses.append(a)
data = dict(router_id=router_id, addresses=addresses) data = dict(router_id=router_id, addresses=addresses)
text = """ text = """
router ospf router ospf
@ -193,31 +176,15 @@ class FRROspfv2(FrrService, ConfigService):
% for addr in addresses: % for addr in addresses:
network ${addr} area 0 network ${addr} area 0
% endfor % endfor
ospf opaque-lsa
! !
""" """
return self.render_text(text, data) return self.render_text(text, data)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
has_mtu = has_mtu_mismatch(iface) if has_mtu_mismatch(ifc):
has_rj45 = rj45_check(iface) return "ip ospf mtu-ignore"
is_ptp = isinstance(iface.net, PtpNet) else:
data = dict(has_mtu=has_mtu, is_ptp=is_ptp, has_rj45=has_rj45) return ""
text = """
% if has_mtu:
ip ospf mtu-ignore
% endif
% if has_rj45:
<% return STOP_RENDERING %>
% endif
% if is_ptp:
ip ospf network point-to-point
% endif
ip ospf hello-interval 2
ip ospf dead-interval 6
ip ospf retransmit-interval 5
"""
return self.render_text(text, data)
class FRROspfv3(FrrService, ConfigService): class FRROspfv3(FrrService, ConfigService):
@ -227,17 +194,19 @@ class FRROspfv3(FrrService, ConfigService):
unified frr.conf file. unified frr.conf file.
""" """
name: str = "FRROSPFv3" name = "FRROSPFv3"
shutdown: list[str] = ["killall ospf6d"] shutdown = ["killall ospf6d"]
validate: list[str] = ["pidof ospf6d"] validate = ["pidof ospf6d"]
ipv4_routing: bool = True ipv4_routing = True
ipv6_routing: bool = True ipv6_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
data = dict(router_id=router_id, ifnames=ifnames) data = dict(router_id=router_id, ifnames=ifnames)
text = """ text = """
router ospf6 router ospf6
@ -249,9 +218,9 @@ class FRROspfv3(FrrService, ConfigService):
""" """
return self.render_text(text, data) return self.render_text(text, data)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
mtu = get_min_mtu(iface) mtu = get_min_mtu(ifc)
if mtu < iface.mtu: if mtu < ifc.mtu:
return f"ipv6 ospf6 ifmtu {mtu}" return f"ipv6 ospf6 ifmtu {mtu}"
else: else:
return "" return ""
@ -264,12 +233,12 @@ class FRRBgp(FrrService, ConfigService):
having the same AS number. having the same AS number.
""" """
name: str = "FRRBGP" name = "FRRBGP"
shutdown: list[str] = ["killall bgpd"] shutdown = ["killall bgpd"]
validate: list[str] = ["pidof bgpd"] validate = ["pidof bgpd"]
custom_needed: bool = True custom_needed = True
ipv4_routing: bool = True ipv4_routing = True
ipv6_routing: bool = True ipv6_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
@ -285,7 +254,7 @@ class FRRBgp(FrrService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
return "" return ""
@ -294,10 +263,10 @@ class FRRRip(FrrService, ConfigService):
The RIP service provides IPv4 routing for wired networks. The RIP service provides IPv4 routing for wired networks.
""" """
name: str = "FRRRIP" name = "FRRRIP"
shutdown: list[str] = ["killall ripd"] shutdown = ["killall ripd"]
validate: list[str] = ["pidof ripd"] validate = ["pidof ripd"]
ipv4_routing: bool = True ipv4_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
text = """ text = """
@ -310,7 +279,7 @@ class FRRRip(FrrService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
return "" return ""
@ -319,10 +288,10 @@ class FRRRipng(FrrService, ConfigService):
The RIP NG service provides IPv6 routing for wired networks. The RIP NG service provides IPv6 routing for wired networks.
""" """
name: str = "FRRRIPNG" name = "FRRRIPNG"
shutdown: list[str] = ["killall ripngd"] shutdown = ["killall ripngd"]
validate: list[str] = ["pidof ripngd"] validate = ["pidof ripngd"]
ipv6_routing: bool = True ipv6_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
text = """ text = """
@ -335,7 +304,7 @@ class FRRRipng(FrrService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
return "" return ""
@ -345,15 +314,17 @@ class FRRBabel(FrrService, ConfigService):
protocol for IPv6 and IPv4 with fast convergence properties. protocol for IPv6 and IPv4 with fast convergence properties.
""" """
name: str = "FRRBabel" name = "FRRBabel"
shutdown: list[str] = ["killall babeld"] shutdown = ["killall babeld"]
validate: list[str] = ["pidof babeld"] validate = ["pidof babeld"]
ipv6_routing: bool = True ipv6_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
text = """ text = """
router babel router babel
% for ifname in ifnames: % for ifname in ifnames:
@ -366,8 +337,8 @@ class FRRBabel(FrrService, ConfigService):
data = dict(ifnames=ifnames) data = dict(ifnames=ifnames)
return self.render_text(text, data) return self.render_text(text, data)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
if is_wireless(iface.net): if isinstance(ifc.net, (WlanNode, EmaneNet)):
text = """ text = """
babel wireless babel wireless
no babel split-horizon no babel split-horizon
@ -385,16 +356,16 @@ class FRRpimd(FrrService, ConfigService):
PIM multicast routing based on XORP. PIM multicast routing based on XORP.
""" """
name: str = "FRRpimd" name = "FRRpimd"
shutdown: list[str] = ["killall pimd"] shutdown = ["killall pimd"]
validate: list[str] = ["pidof pimd"] validate = ["pidof pimd"]
ipv4_routing: bool = True ipv4_routing = True
def frr_config(self) -> str: def frr_config(self) -> str:
ifname = "eth0" ifname = "eth0"
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
if iface.name != "lo": if ifc.name != "lo":
ifname = iface.name ifname = ifc.name
break break
text = f""" text = f"""
@ -411,7 +382,7 @@ class FRRpimd(FrrService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def frr_iface_config(self, iface: CoreInterface) -> str: def frr_interface_config(self, ifc: CoreInterface) -> str:
text = """ text = """
ip mfea ip mfea
ip igmp ip igmp

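Adding another FRR daemon follows the same shape as the services above; a hedged sketch (the EIGRP daemon and its one-line config are illustrative only and not part of this change):

class FRREigrp(FrrService, ConfigService):
    name: str = "FRREIGRP"
    shutdown: list[str] = ["killall eigrpd"]
    validate: list[str] = ["pidof eigrpd"]
    ipv4_routing: bool = True

    def frr_config(self) -> str:
        text = """
        router eigrp 1
        !
        """
        return self.clean_text(text)

    def frr_iface_config(self, iface: CoreInterface) -> str:
        return ""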
View file

@ -20,7 +20,6 @@ nhrpd=yes
eigrpd=yes eigrpd=yes
babeld=yes babeld=yes
sharpd=yes sharpd=yes
staticd=yes
pbrd=yes pbrd=yes
bfdd=yes bfdd=yes
fabricd=yes fabricd=yes

View file

@ -1,5 +1,5 @@
% for iface, ip4s, ip6s, is_control in ifaces: % for ifc, ip4s, ip6s, is_control in interfaces:
interface ${iface.name} interface ${ifc.name}
% if want_ip4: % if want_ip4:
% for addr in ip4s: % for addr in ip4s:
ip address ${addr} ip address ${addr}
@ -12,7 +12,7 @@ interface ${iface.name}
% endif % endif
% if not is_control: % if not is_control:
% for service in services: % for service in services:
% for line in service.frr_iface_config(iface).split("\n"): % for line in service.frr_interface_config(ifc).split("\n"):
${line} ${line}
% endfor % endfor
% endfor % endfor

View file

@ -48,10 +48,6 @@ bootdaemon()
flags="$flags -6" flags="$flags -6"
fi fi
if [ "$1" = "ospfd" ]; then
flags="$flags --apiserver"
fi
#force FRR to use CORE generated conf file #force FRR to use CORE generated conf file
flags="$flags -d -f $FRR_CONF" flags="$flags -d -f $FRR_CONF"
$FRR_SBIN_DIR/$1 $flags $FRR_SBIN_DIR/$1 $flags
@ -102,8 +98,8 @@ confcheck
bootfrr bootfrr
# reset interfaces # reset interfaces
% for iface, _, _ , _ in ifaces: % for ifc, _, _ , _ in interfaces:
ip link set dev ${iface.name} down ip link set dev ${ifc.name} down
sleep 1 sleep 1
ip link set dev ${iface.name} up ip link set dev ${ifc.name} up
% endfor % endfor

View file

@ -1,164 +1,212 @@
from typing import Any from typing import Any, Dict
import netaddr
from core import utils from core import utils
from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode from core.configservice.base import ConfigService, ConfigServiceMode
GROUP: str = "ProtoSvc" GROUP = "ProtoSvc"
class MgenSinkService(ConfigService): class MgenSinkService(ConfigService):
name: str = "MGEN_Sink" name = "MGEN_Sink"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["mgensink.sh", "sink.mgen"] files = ["mgensink.sh", "sink.mgen"]
executables: list[str] = ["mgen"] executables = ["mgen"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash mgensink.sh"] startup = ["sh mgensink.sh"]
validate: list[str] = ["pidof mgen"] validate = ["pidof mgen"]
shutdown: list[str] = ["killall mgen"] shutdown = ["killall mgen"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
name = utils.sysctl_devname(iface.name) name = utils.sysctl_devname(ifc.name)
ifnames.append(name) ifnames.append(name)
return dict(ifnames=ifnames) return dict(ifnames=ifnames)
class NrlNhdp(ConfigService): class NrlNhdp(ConfigService):
name: str = "NHDP" name = "NHDP"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["nrlnhdp.sh"] files = ["nrlnhdp.sh"]
executables: list[str] = ["nrlnhdp"] executables = ["nrlnhdp"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash nrlnhdp.sh"] startup = ["sh nrlnhdp.sh"]
validate: list[str] = ["pidof nrlnhdp"] validate = ["pidof nrlnhdp"]
shutdown: list[str] = ["killall nrlnhdp"] shutdown = ["killall nrlnhdp"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
has_smf = "SMF" in self.node.config_services has_smf = "SMF" in self.node.config_services
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
return dict(has_smf=has_smf, ifnames=ifnames) return dict(has_smf=has_smf, ifnames=ifnames)
class NrlSmf(ConfigService): class NrlSmf(ConfigService):
name: str = "SMF" name = "SMF"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["startsmf.sh"] files = ["startsmf.sh"]
executables: list[str] = ["nrlsmf", "killall"] executables = ["nrlsmf", "killall"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash startsmf.sh"] startup = ["sh startsmf.sh"]
validate: list[str] = ["pidof nrlsmf"] validate = ["pidof nrlsmf"]
shutdown: list[str] = ["killall nrlsmf"] shutdown = ["killall nrlsmf"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
has_arouted = "arouted" in self.node.config_services
has_nhdp = "NHDP" in self.node.config_services has_nhdp = "NHDP" in self.node.config_services
has_olsr = "OLSR" in self.node.config_services has_olsr = "OLSR" in self.node.config_services
ifnames = [] ifnames = []
ip4_prefix = None ip4_prefix = None
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
ip4 = iface.get_ip4() continue
if ip4: ifnames.append(ifc.name)
ip4_prefix = f"{ip4.ip}/{24}" if ip4_prefix:
break continue
for a in ifc.addrlist:
a = a.split("/")[0]
if netaddr.valid_ipv4(a):
ip4_prefix = f"{a}/{24}"
break
return dict( return dict(
has_nhdp=has_nhdp, has_olsr=has_olsr, ifnames=ifnames, ip4_prefix=ip4_prefix has_arouted=has_arouted,
has_nhdp=has_nhdp,
has_olsr=has_olsr,
ifnames=ifnames,
ip4_prefix=ip4_prefix,
) )
class NrlOlsr(ConfigService): class NrlOlsr(ConfigService):
name: str = "OLSR" name = "OLSR"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["nrlolsrd.sh"] files = ["nrlolsrd.sh"]
executables: list[str] = ["nrlolsrd"] executables = ["nrlolsrd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash nrlolsrd.sh"] startup = ["sh nrlolsrd.sh"]
validate: list[str] = ["pidof nrlolsrd"] validate = ["pidof nrlolsrd"]
shutdown: list[str] = ["killall nrlolsrd"] shutdown = ["killall nrlolsrd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
has_smf = "SMF" in self.node.config_services has_smf = "SMF" in self.node.config_services
has_zebra = "zebra" in self.node.config_services has_zebra = "zebra" in self.node.config_services
ifname = None ifname = None
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifname = iface.name if getattr(ifc, "control", False):
continue
ifname = ifc.name
break break
return dict(has_smf=has_smf, has_zebra=has_zebra, ifname=ifname) return dict(has_smf=has_smf, has_zebra=has_zebra, ifname=ifname)
class NrlOlsrv2(ConfigService): class NrlOlsrv2(ConfigService):
name: str = "OLSRv2" name = "OLSRv2"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["nrlolsrv2.sh"] files = ["nrlolsrv2.sh"]
executables: list[str] = ["nrlolsrv2"] executables = ["nrlolsrv2"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash nrlolsrv2.sh"] startup = ["sh nrlolsrv2.sh"]
validate: list[str] = ["pidof nrlolsrv2"] validate = ["pidof nrlolsrv2"]
shutdown: list[str] = ["killall nrlolsrv2"] shutdown = ["killall nrlolsrv2"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
has_smf = "SMF" in self.node.config_services has_smf = "SMF" in self.node.config_services
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
return dict(has_smf=has_smf, ifnames=ifnames) return dict(has_smf=has_smf, ifnames=ifnames)
class OlsrOrg(ConfigService): class OlsrOrg(ConfigService):
name: str = "OLSRORG" name = "OLSRORG"
group: str = GROUP group = GROUP
directories: list[str] = ["/etc/olsrd"] directories = ["/etc/olsrd"]
files: list[str] = ["olsrd.sh", "/etc/olsrd/olsrd.conf"] files = ["olsrd.sh", "/etc/olsrd/olsrd.conf"]
executables: list[str] = ["olsrd"] executables = ["olsrd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash olsrd.sh"] startup = ["sh olsrd.sh"]
validate: list[str] = ["pidof olsrd"] validate = ["pidof olsrd"]
shutdown: list[str] = ["killall olsrd"] shutdown = ["killall olsrd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
has_smf = "SMF" in self.node.config_services has_smf = "SMF" in self.node.config_services
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
return dict(has_smf=has_smf, ifnames=ifnames) return dict(has_smf=has_smf, ifnames=ifnames)
class MgenActor(ConfigService): class MgenActor(ConfigService):
name: str = "MgenActor" name = "MgenActor"
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = ["start_mgen_actor.sh"] files = ["start_mgen_actor.sh"]
executables: list[str] = ["mgen"] executables = ["mgen"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash start_mgen_actor.sh"] startup = ["sh start_mgen_actor.sh"]
validate: list[str] = ["pidof mgen"] validate = ["pidof mgen"]
shutdown: list[str] = ["killall mgen"] shutdown = ["killall mgen"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
class Arouted(ConfigService):
name = "arouted"
group = GROUP
directories = []
files = ["startarouted.sh"]
executables = ["arouted"]
dependencies = []
startup = ["sh startarouted.sh"]
validate = ["pidof arouted"]
shutdown = ["pkill arouted"]
validation_mode = ConfigServiceMode.BLOCKING
default_configs = []
modes = {}
def data(self) -> Dict[str, Any]:
ip4_prefix = None
for ifc in self.node.netifs():
if getattr(ifc, "control", False):
continue
if ip4_prefix:
continue
for a in ifc.addrlist:
a = a.split("/")[0]
if netaddr.valid_ipv4(a):
ip4_prefix = f"{a}/{24}"
break
return dict(ip4_prefix=ip4_prefix)
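Each service above exposes just enough through data() for its Mako template; for example, with NHDP assigned to a node the flow is roughly as follows (a sketch; the printed dictionary is only indicative):

nhdp = node.config_services["NHDP"]
print(nhdp.data())                                    # e.g. {'has_smf': False, 'ifnames': ['eth0']}
print(nhdp.get_rendered_templates()["nrlnhdp.sh"])    # the script template shown in the next file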

View file

@ -1,7 +1,7 @@
<% <%
ifaces = "-i " + " -i ".join(ifnames) interfaces = "-i " + " -i ".join(ifnames)
smf = "" smf = ""
if has_smf: if has_smf:
smf = "-flooding ecds -smfClient %s_smf" % node.name smf = "-flooding ecds -smfClient %s_smf" % node.name
%> %>
nrlnhdp -l /var/log/nrlnhdp.log -rpipe ${node.name}_nhdp ${smf} ${ifaces} nrlnhdp -l /var/log/nrlnhdp.log -rpipe ${node.name}_nhdp ${smf} ${interfaces}

View file

@ -1,7 +1,7 @@
<% <%
ifaces = "-i " + " -i ".join(ifnames) interfaces = "-i " + " -i ".join(ifnames)
smf = "" smf = ""
if has_smf: if has_smf:
smf = "-flooding ecds -smfClient %s_smf" % node.name smf = "-flooding ecds -smfClient %s_smf" % node.name
%> %>
nrlolsrv2 -l /var/log/nrlolsrv2.log -rpipe ${node.name}_olsrv2 -p olsr ${smf} ${ifaces} nrlolsrv2 -l /var/log/nrlolsrv2.log -rpipe ${node.name}_olsrv2 -p olsr ${smf} ${interfaces}

View file

@ -1,4 +1,4 @@
<% <%
ifaces = "-i " + " -i ".join(ifnames) interfaces = "-i " + " -i ".join(ifnames)
%> %>
olsrd ${ifaces} olsrd ${interfaces}

View file

@ -0,0 +1,15 @@
#!/bin/sh
for f in "/tmp/${node.name}_smf"; do
count=1
until [ -e "$f" ]; do
if [ $count -eq 10 ]; then
echo "ERROR: nrlsmf pipe not found: $f" >&2
exit 1
fi
sleep 0.1
count=$(($count + 1))
done
done
ip route add ${ip4_prefix} dev lo
arouted instance ${node.name}_smf tap ${node.name}_tap stability 10 2>&1 > /var/log/arouted.log &

View file

@ -1,5 +1,8 @@
<% <%
ifaces = ",".join(ifnames) interfaces = ",".join(ifnames)
arouted = ""
if has_arouted:
arouted = "tap %s_tap unicast %s push lo,%s resequence on" % (node.name, ip4_prefix, ifnames[0])
if has_nhdp: if has_nhdp:
flood = "ecds" flood = "ecds"
elif has_olsr: elif has_olsr:
@ -9,4 +12,4 @@
%> %>
#!/bin/sh #!/bin/sh
# auto-generated by NrlSmf service # auto-generated by NrlSmf service
nrlsmf instance ${node.name}_smf ${flood} ${ifaces} hash MD5 log /var/log/nrlsmf.log < /dev/null > /dev/null 2>&1 & nrlsmf instance ${node.name}_smf ${interfaces} ${arouted} ${flood} hash MD5 log /var/log/nrlsmf.log < /dev/null > /dev/null 2>&1 &

View file

@ -1,58 +1,46 @@
import abc import abc
import logging import logging
from typing import Any from typing import Any, Dict
from core.config import Configuration import netaddr
from core import constants
from core.configservice.base import ConfigService, ConfigServiceMode from core.configservice.base import ConfigService, ConfigServiceMode
from core.emane.nodes import EmaneNet from core.emane.nodes import EmaneNet
from core.nodes.base import CoreNodeBase, NodeBase from core.nodes.base import CoreNodeBase
from core.nodes.interface import DEFAULT_MTU, CoreInterface from core.nodes.interface import CoreInterface
from core.nodes.network import PtpNet, WlanNode from core.nodes.network import WlanNode
from core.nodes.physical import Rj45Node
from core.nodes.wireless import WirelessNode
logger = logging.getLogger(__name__) GROUP = "Quagga"
GROUP: str = "Quagga"
QUAGGA_STATE_DIR: str = "/var/run/quagga"
def is_wireless(node: NodeBase) -> bool: def has_mtu_mismatch(ifc: CoreInterface) -> bool:
"""
Check if the node is a wireless type node.
:param node: node to check type for
:return: True if wireless type, False otherwise
"""
return isinstance(node, (WlanNode, EmaneNet, WirelessNode))
def has_mtu_mismatch(iface: CoreInterface) -> bool:
""" """
Helper to detect MTU mismatch and add the appropriate OSPF Helper to detect MTU mismatch and add the appropriate OSPF
mtu-ignore command. This is needed when e.g. a node is linked via a mtu-ignore command. This is needed when e.g. a node is linked via a
GreTap device. GreTap device.
""" """
if iface.mtu != DEFAULT_MTU: if ifc.mtu != 1500:
return True return True
if not iface.net: if not ifc.net:
return False return False
for net_iface in iface.net.get_ifaces(): for i in ifc.net.netifs():
if net_iface.mtu != iface.mtu: if i.mtu != ifc.mtu:
return True return True
return False return False
def get_min_mtu(iface: CoreInterface): def get_min_mtu(ifc):
""" """
Helper to discover the minimum MTU of interfaces linked with the Helper to discover the minimum MTU of interfaces linked with the
given interface. given interface.
""" """
mtu = iface.mtu mtu = ifc.mtu
if not iface.net: if not ifc.net:
return mtu return mtu
for iface in iface.net.get_ifaces(): for i in ifc.net.netifs():
if iface.mtu < mtu: if i.mtu < mtu:
mtu = iface.mtu mtu = i.mtu
return mtu return mtu
@ -60,53 +48,42 @@ def get_router_id(node: CoreNodeBase) -> str:
""" """
Helper to return the first IPv4 address of a node as its router ID. Helper to return the first IPv4 address of a node as its router ID.
""" """
for iface in node.get_ifaces(control=False): for ifc in node.netifs():
ip4 = iface.get_ip4() if getattr(ifc, "control", False):
if ip4: continue
return str(ip4.ip) for a in ifc.addrlist:
a = a.split("/")[0]
if netaddr.valid_ipv4(a):
return a
return "0.0.0.0" return "0.0.0.0"
def rj45_check(iface: CoreInterface) -> bool:
"""
Helper to detect whether an interface is connected to an external RJ45
link.
"""
if iface.net:
for peer_iface in iface.net.get_ifaces():
if peer_iface == iface:
continue
if isinstance(peer_iface.node, Rj45Node):
return True
return False
class Zebra(ConfigService): class Zebra(ConfigService):
name: str = "zebra" name = "zebra"
group: str = GROUP group = GROUP
directories: list[str] = ["/usr/local/etc/quagga", "/var/run/quagga"] directories = ["/usr/local/etc/quagga", "/var/run/quagga"]
files: list[str] = [ files = [
"/usr/local/etc/quagga/Quagga.conf", "/usr/local/etc/quagga/Quagga.conf",
"quaggaboot.sh", "quaggaboot.sh",
"/usr/local/etc/quagga/vtysh.conf", "/usr/local/etc/quagga/vtysh.conf",
] ]
executables: list[str] = ["zebra"] executables = ["zebra"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash quaggaboot.sh zebra"] startup = ["sh quaggaboot.sh zebra"]
validate: list[str] = ["pidof zebra"] validate = ["pidof zebra"]
shutdown: list[str] = ["killall zebra"] shutdown = ["killall zebra"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
quagga_bin_search = self.node.session.options.get( quagga_bin_search = self.node.session.options.get_config(
"quagga_bin_search", default="/usr/local/bin /usr/bin /usr/lib/quagga" "quagga_bin_search", default="/usr/local/bin /usr/bin /usr/lib/quagga"
).strip('"') ).strip('"')
quagga_sbin_search = self.node.session.options.get( quagga_sbin_search = self.node.session.options.get_config(
"quagga_sbin_search", default="/usr/local/sbin /usr/sbin /usr/lib/quagga" "quagga_sbin_search", default="/usr/local/sbin /usr/sbin /usr/lib/quagga"
).strip('"') ).strip('"')
quagga_state_dir = QUAGGA_STATE_DIR quagga_state_dir = constants.QUAGGA_STATE_DIR
quagga_conf = self.files[0] quagga_conf = self.files[0]
services = [] services = []
@ -115,36 +92,31 @@ class Zebra(ConfigService):
for service in self.node.config_services.values(): for service in self.node.config_services.values():
if self.name not in service.dependencies: if self.name not in service.dependencies:
continue continue
if not isinstance(service, QuaggaService):
continue
if service.ipv4_routing: if service.ipv4_routing:
want_ip4 = True want_ip4 = True
if service.ipv6_routing: if service.ipv6_routing:
want_ip6 = True want_ip6 = True
services.append(service) services.append(service)
ifaces = [] interfaces = []
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
ip4s = [] ip4s = []
ip6s = [] ip6s = []
for ip4 in iface.ip4s: for x in ifc.addrlist:
ip4s.append(str(ip4)) addr = x.split("/")[0]
for ip6 in iface.ip6s: if netaddr.valid_ipv4(addr):
ip6s.append(str(ip6)) ip4s.append(x)
configs = [] else:
if not iface.control: ip6s.append(x)
for service in services: is_control = getattr(ifc, "control", False)
config = service.quagga_iface_config(iface) interfaces.append((ifc, ip4s, ip6s, is_control))
if config:
configs.append(config.split("\n"))
ifaces.append((iface, ip4s, ip6s, configs))
return dict( return dict(
quagga_bin_search=quagga_bin_search, quagga_bin_search=quagga_bin_search,
quagga_sbin_search=quagga_sbin_search, quagga_sbin_search=quagga_sbin_search,
quagga_state_dir=quagga_state_dir, quagga_state_dir=quagga_state_dir,
quagga_conf=quagga_conf, quagga_conf=quagga_conf,
ifaces=ifaces, interfaces=interfaces,
want_ip4=want_ip4, want_ip4=want_ip4,
want_ip6=want_ip6, want_ip6=want_ip6,
services=services, services=services,
@ -152,22 +124,22 @@ class Zebra(ConfigService):
class QuaggaService(abc.ABC): class QuaggaService(abc.ABC):
group: str = GROUP group = GROUP
directories: list[str] = [] directories = []
files: list[str] = [] files = []
executables: list[str] = [] executables = []
dependencies: list[str] = ["zebra"] dependencies = ["zebra"]
startup: list[str] = [] startup = []
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
ipv4_routing: bool = False ipv4_routing = False
ipv6_routing: bool = False ipv6_routing = False
@abc.abstractmethod @abc.abstractmethod
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
raise NotImplementedError raise NotImplementedError
@abc.abstractmethod @abc.abstractmethod
@ -182,38 +154,27 @@ class Ospfv2(QuaggaService, ConfigService):
unified Quagga.conf file. unified Quagga.conf file.
""" """
name: str = "OSPFv2" name = "OSPFv2"
validate: list[str] = ["pidof ospfd"] validate = ["pidof ospfd"]
shutdown: list[str] = ["killall ospfd"] shutdown = ["killall ospfd"]
ipv4_routing: bool = True ipv4_routing = True
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
has_mtu = has_mtu_mismatch(iface) if has_mtu_mismatch(ifc):
has_rj45 = rj45_check(iface) return "ip ospf mtu-ignore"
is_ptp = isinstance(iface.net, PtpNet) else:
data = dict(has_mtu=has_mtu, is_ptp=is_ptp, has_rj45=has_rj45) return ""
text = """
% if has_mtu:
ip ospf mtu-ignore
% endif
% if has_rj45:
<% return STOP_RENDERING %>
% endif
% if is_ptp:
ip ospf network point-to-point
% endif
ip ospf hello-interval 2
ip ospf dead-interval 6
ip ospf retransmit-interval 5
"""
return self.render_text(text, data)
def quagga_config(self) -> str: def quagga_config(self) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
addresses = [] addresses = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
for ip4 in iface.ip4s: if getattr(ifc, "control", False):
addresses.append(str(ip4)) continue
for a in ifc.addrlist:
addr = a.split("/")[0]
if netaddr.valid_ipv4(addr):
addresses.append(a)
data = dict(router_id=router_id, addresses=addresses) data = dict(router_id=router_id, addresses=addresses)
text = """ text = """
router ospf router ospf
@ -233,15 +194,15 @@ class Ospfv3(QuaggaService, ConfigService):
unified Quagga.conf file. unified Quagga.conf file.
""" """
name: str = "OSPFv3" name = "OSPFv3"
shutdown: list[str] = ["killall ospf6d"] shutdown = ("killall ospf6d",)
validate: list[str] = ["pidof ospf6d"] validate = ("pidof ospf6d",)
ipv4_routing: bool = True ipv4_routing = True
ipv6_routing: bool = True ipv6_routing = True
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
mtu = get_min_mtu(iface) mtu = get_min_mtu(ifc)
if mtu < iface.mtu: if mtu < ifc.mtu:
return f"ipv6 ospf6 ifmtu {mtu}" return f"ipv6 ospf6 ifmtu {mtu}"
else: else:
return "" return ""
@ -249,8 +210,10 @@ class Ospfv3(QuaggaService, ConfigService):
def quagga_config(self) -> str: def quagga_config(self) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
data = dict(router_id=router_id, ifnames=ifnames) data = dict(router_id=router_id, ifnames=ifnames)
text = """ text = """
router ospf6 router ospf6
@ -272,11 +235,17 @@ class Ospfv3mdr(Ospfv3):
unified Quagga.conf file. unified Quagga.conf file.
""" """
name: str = "OSPFv3MDR" name = "OSPFv3MDR"
def quagga_iface_config(self, iface: CoreInterface) -> str: def data(self) -> Dict[str, Any]:
config = super().quagga_iface_config(iface) for ifc in self.node.netifs():
if is_wireless(iface.net): is_wireless = isinstance(ifc.net, (WlanNode, EmaneNet))
logging.info("MDR wireless: %s", is_wireless)
return dict()
def quagga_interface_config(self, ifc: CoreInterface) -> str:
config = super().quagga_interface_config(ifc)
if isinstance(ifc.net, (WlanNode, EmaneNet)):
config = self.clean_text( config = self.clean_text(
f""" f"""
{config} {config}
@ -299,13 +268,16 @@ class Bgp(QuaggaService, ConfigService):
having the same AS number. having the same AS number.
""" """
name: str = "BGP" name = "BGP"
shutdown: list[str] = ["killall bgpd"] shutdown = ["killall bgpd"]
validate: list[str] = ["pidof bgpd"] validate = ["pidof bgpd"]
ipv4_routing: bool = True ipv4_routing = True
ipv6_routing: bool = True ipv6_routing = True
def quagga_config(self) -> str: def quagga_config(self) -> str:
return ""
def quagga_interface_config(self, ifc: CoreInterface) -> str:
router_id = get_router_id(self.node) router_id = get_router_id(self.node)
text = f""" text = f"""
! BGP configuration ! BGP configuration
@ -319,19 +291,16 @@ class Bgp(QuaggaService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def quagga_iface_config(self, iface: CoreInterface) -> str:
return ""
class Rip(QuaggaService, ConfigService): class Rip(QuaggaService, ConfigService):
""" """
The RIP service provides IPv4 routing for wired networks. The RIP service provides IPv4 routing for wired networks.
""" """
name: str = "RIP" name = "RIP"
shutdown: list[str] = ["killall ripd"] shutdown = ["killall ripd"]
validate: list[str] = ["pidof ripd"] validate = ["pidof ripd"]
ipv4_routing: bool = True ipv4_routing = True
def quagga_config(self) -> str: def quagga_config(self) -> str:
text = """ text = """
@ -344,7 +313,7 @@ class Rip(QuaggaService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
return "" return ""
@ -353,10 +322,10 @@ class Ripng(QuaggaService, ConfigService):
The RIP NG service provides IPv6 routing for wired networks. The RIP NG service provides IPv6 routing for wired networks.
""" """
name: str = "RIPNG" name = "RIPNG"
shutdown: list[str] = ["killall ripngd"] shutdown = ["killall ripngd"]
validate: list[str] = ["pidof ripngd"] validate = ["pidof ripngd"]
ipv6_routing: bool = True ipv6_routing = True
def quagga_config(self) -> str: def quagga_config(self) -> str:
text = """ text = """
@ -369,7 +338,7 @@ class Ripng(QuaggaService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
return "" return ""
@ -379,15 +348,17 @@ class Babel(QuaggaService, ConfigService):
protocol for IPv6 and IPv4 with fast convergence properties. protocol for IPv6 and IPv4 with fast convergence properties.
""" """
name: str = "Babel" name = "Babel"
shutdown: list[str] = ["killall babeld"] shutdown = ["killall babeld"]
validate: list[str] = ["pidof babeld"] validate = ["pidof babeld"]
ipv6_routing: bool = True ipv6_routing = True
def quagga_config(self) -> str: def quagga_config(self) -> str:
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
text = """ text = """
router babel router babel
% for ifname in ifnames: % for ifname in ifnames:
@ -400,8 +371,8 @@ class Babel(QuaggaService, ConfigService):
data = dict(ifnames=ifnames) data = dict(ifnames=ifnames)
return self.render_text(text, data) return self.render_text(text, data)
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
if is_wireless(iface.net): if isinstance(ifc.net, (WlanNode, EmaneNet)):
text = """ text = """
babel wireless babel wireless
no babel split-horizon no babel split-horizon
@ -419,16 +390,16 @@ class Xpimd(QuaggaService, ConfigService):
PIM multicast routing based on XORP. PIM multicast routing based on XORP.
""" """
name: str = "Xpimd" name = "Xpimd"
shutdown: list[str] = ["killall xpimd"] shutdown = ["killall xpimd"]
validate: list[str] = ["pidof xpimd"] validate = ["pidof xpimd"]
ipv4_routing: bool = True ipv4_routing = True
def quagga_config(self) -> str: def quagga_config(self) -> str:
ifname = "eth0" ifname = "eth0"
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
if iface.name != "lo": if ifc.name != "lo":
ifname = iface.name ifname = ifc.name
break break
text = f""" text = f"""
@ -445,7 +416,7 @@ class Xpimd(QuaggaService, ConfigService):
""" """
return self.clean_text(text) return self.clean_text(text)
def quagga_iface_config(self, iface: CoreInterface) -> str: def quagga_interface_config(self, ifc: CoreInterface) -> str:
text = """ text = """
ip mfea ip mfea
ip pim ip pim
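
Every routing service in this module follows the same shape: subclass QuaggaService together with ConfigService, fill in the lifecycle lists, and provide both quagga_config() (the router-level block) and the per-interface hook that Zebra's template stitches under each "interface" stanza of the unified Quagga.conf. A rough sketch of one more service in the newer, annotated style (left column), assuming it is added to this same module so QuaggaService, ConfigService and CoreInterface are already imported; the protocol, daemon and class names are invented for illustration:

class DemoRouting(QuaggaService, ConfigService):
    """Hypothetical example service, not part of CORE."""

    name: str = "DemoRouting"
    validate: list[str] = ["pidof demod"]  # hypothetical daemon
    shutdown: list[str] = ["killall demod"]
    ipv4_routing: bool = True

    def quagga_config(self) -> str:
        # router-level block merged into the unified Quagga.conf
        text = """
        router demo
          redistribute connected
        !
        """
        return self.clean_text(text)

    def quagga_iface_config(self, iface: CoreInterface) -> str:
        # lines emitted under "interface <name>" for non-control interfaces
        return "ip demo hello-interval 2"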

View file

@ -1,5 +1,5 @@
% for iface, ip4s, ip6s, configs in ifaces: % for ifc, ip4s, ip6s, is_control in interfaces:
interface ${iface.name} interface ${ifc.name}
% if want_ip4: % if want_ip4:
% for addr in ip4s: % for addr in ip4s:
ip address ${addr} ip address ${addr}
@ -10,11 +10,13 @@ interface ${iface.name}
ipv6 address ${addr} ipv6 address ${addr}
% endfor % endfor
% endif % endif
% for config in configs: % if not is_control:
% for line in config: % for service in services:
% for line in service.quagga_interface_config(ifc).split("\n"):
${line} ${line}
% endfor
% endfor % endfor
% endfor % endif
! !
% endfor % endfor

View file

@ -1,104 +0,0 @@
from typing import Any
from core.config import ConfigString, Configuration
from core.configservice.base import ConfigService, ConfigServiceMode
GROUP_NAME: str = "Security"
class VpnClient(ConfigService):
name: str = "VPNClient"
group: str = GROUP_NAME
directories: list[str] = []
files: list[str] = ["vpnclient.sh"]
executables: list[str] = ["openvpn", "ip", "killall"]
dependencies: list[str] = []
startup: list[str] = ["bash vpnclient.sh"]
validate: list[str] = ["pidof openvpn"]
shutdown: list[str] = ["killall openvpn"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [
ConfigString(id="keydir", label="Key Dir", default="/etc/core/keys"),
ConfigString(id="keyname", label="Key Name", default="client1"),
ConfigString(id="server", label="Server", default="10.0.2.10"),
]
modes: dict[str, dict[str, str]] = {}
class VpnServer(ConfigService):
name: str = "VPNServer"
group: str = GROUP_NAME
directories: list[str] = []
files: list[str] = ["vpnserver.sh"]
executables: list[str] = ["openvpn", "ip", "killall"]
dependencies: list[str] = []
startup: list[str] = ["bash vpnserver.sh"]
validate: list[str] = ["pidof openvpn"]
shutdown: list[str] = ["killall openvpn"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [
ConfigString(id="keydir", label="Key Dir", default="/etc/core/keys"),
ConfigString(id="keyname", label="Key Name", default="server"),
ConfigString(id="subnet", label="Subnet", default="10.0.200.0"),
]
modes: dict[str, dict[str, str]] = {}
def data(self) -> dict[str, Any]:
address = None
for iface in self.node.get_ifaces(control=False):
ip4 = iface.get_ip4()
if ip4:
address = str(ip4.ip)
break
return dict(address=address)
class IPsec(ConfigService):
name: str = "IPsec"
group: str = GROUP_NAME
directories: list[str] = []
files: list[str] = ["ipsec.sh"]
executables: list[str] = ["racoon", "ip", "setkey", "killall"]
dependencies: list[str] = []
startup: list[str] = ["bash ipsec.sh"]
validate: list[str] = ["pidof racoon"]
shutdown: list[str] = ["killall racoon"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = []
modes: dict[str, dict[str, str]] = {}
class Firewall(ConfigService):
name: str = "Firewall"
group: str = GROUP_NAME
directories: list[str] = []
files: list[str] = ["firewall.sh"]
executables: list[str] = ["iptables"]
dependencies: list[str] = []
startup: list[str] = ["bash firewall.sh"]
validate: list[str] = []
shutdown: list[str] = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = []
modes: dict[str, dict[str, str]] = {}
class Nat(ConfigService):
name: str = "NAT"
group: str = GROUP_NAME
directories: list[str] = []
files: list[str] = ["nat.sh"]
executables: list[str] = ["iptables"]
dependencies: list[str] = []
startup: list[str] = ["bash nat.sh"]
validate: list[str] = []
shutdown: list[str] = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = []
modes: dict[str, dict[str, str]] = {}
def data(self) -> dict[str, Any]:
ifnames = []
for iface in self.node.get_ifaces(control=False):
ifnames.append(iface.name)
return dict(ifnames=ifnames)
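
The file removed above (left column) and the version re-added just below declare the same service options with two different APIs: the newer tree uses typed helpers such as ConfigString, while this branch rebuilds each option with the generic Configuration class and an explicit ConfigDataTypes value. A small comparison sketch for the VPN client's key directory option, written so it runs against either tree, on the assumption that ConfigString only exists in the newer one:

try:  # newer API
    from core.config import ConfigString
    keydir = ConfigString(id="keydir", label="Key Dir", default="/etc/core/keys")
except ImportError:  # older API used on this branch
    from core.config import Configuration
    from core.emulator.enumerations import ConfigDataTypes
    keydir = Configuration(
        _id="keydir",
        _type=ConfigDataTypes.STRING,
        label="Key Dir",
        default="/etc/core/keys",
    )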

View file

@ -0,0 +1,141 @@
from typing import Any, Dict
import netaddr
from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode
from core.emulator.enumerations import ConfigDataTypes
GROUP_NAME = "Security"
class VpnClient(ConfigService):
name = "VPNClient"
group = GROUP_NAME
directories = []
files = ["vpnclient.sh"]
executables = ["openvpn", "ip", "killall"]
dependencies = []
startup = ["sh vpnclient.sh"]
validate = ["pidof openvpn"]
shutdown = ["killall openvpn"]
validation_mode = ConfigServiceMode.BLOCKING
default_configs = [
Configuration(
_id="keydir",
_type=ConfigDataTypes.STRING,
label="Key Dir",
default="/etc/core/keys",
),
Configuration(
_id="keyname",
_type=ConfigDataTypes.STRING,
label="Key Name",
default="client1",
),
Configuration(
_id="server",
_type=ConfigDataTypes.STRING,
label="Server",
default="10.0.2.10",
),
]
modes = {}
class VpnServer(ConfigService):
name = "VPNServer"
group = GROUP_NAME
directories = []
files = ["vpnserver.sh"]
executables = ["openvpn", "ip", "killall"]
dependencies = []
startup = ["sh vpnserver.sh"]
validate = ["pidof openvpn"]
shutdown = ["killall openvpn"]
validation_mode = ConfigServiceMode.BLOCKING
default_configs = [
Configuration(
_id="keydir",
_type=ConfigDataTypes.STRING,
label="Key Dir",
default="/etc/core/keys",
),
Configuration(
_id="keyname",
_type=ConfigDataTypes.STRING,
label="Key Name",
default="server",
),
Configuration(
_id="subnet",
_type=ConfigDataTypes.STRING,
label="Subnet",
default="10.0.200.0",
),
]
modes = {}
def data(self) -> Dict[str, Any]:
address = None
for ifc in self.node.netifs():
if getattr(ifc, "control", False):
continue
for x in ifc.addrlist:
addr = x.split("/")[0]
if netaddr.valid_ipv4(addr):
address = addr
return dict(address=address)
class IPsec(ConfigService):
name = "IPsec"
group = GROUP_NAME
directories = []
files = ["ipsec.sh"]
executables = ["racoon", "ip", "setkey", "killall"]
dependencies = []
startup = ["sh ipsec.sh"]
validate = ["pidof racoon"]
shutdown = ["killall racoon"]
validation_mode = ConfigServiceMode.BLOCKING
default_configs = []
modes = {}
class Firewall(ConfigService):
name = "Firewall"
group = GROUP_NAME
directories = []
files = ["firewall.sh"]
executables = ["iptables"]
dependencies = []
startup = ["sh firewall.sh"]
validate = []
shutdown = []
validation_mode = ConfigServiceMode.BLOCKING
default_configs = []
modes = {}
class Nat(ConfigService):
name = "NAT"
group = GROUP_NAME
directories = []
files = ["nat.sh"]
executables = ["iptables"]
dependencies = []
startup = ["sh nat.sh"]
validate = []
shutdown = []
validation_mode = ConfigServiceMode.BLOCKING
default_configs = []
modes = {}
def data(self) -> Dict[str, Any]:
ifnames = []
for ifc in self.node.netifs():
if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
return dict(ifnames=ifnames)
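
In this older API, interface addresses live on ifc.addrlist as "address/prefix" strings, so services repeatedly split and validate them with netaddr (the newer API seen earlier exposes iface.ip4s and get_ip4() instead). A tiny standalone illustration of the selection loop VpnServer.data() performs, using a hypothetical address list:

import netaddr

addrlist = ["2001:db8::1/64", "10.0.2.10/24"]  # hypothetical interface addresses
address = None
for x in addrlist:
    addr = x.split("/")[0]
    if netaddr.valid_ipv4(addr):
        address = addr
print(address)  # -> 10.0.2.10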

View file

@ -0,0 +1,47 @@
from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode
from core.emulator.enumerations import ConfigDataTypes
class SimpleService(ConfigService):
name = "Simple"
group = "SimpleGroup"
directories = ["/etc/quagga", "/usr/local/lib"]
files = ["test1.sh", "test2.sh"]
executables = []
dependencies = []
startup = []
validate = []
shutdown = []
validation_mode = ConfigServiceMode.BLOCKING
default_configs = [
Configuration(_id="value1", _type=ConfigDataTypes.STRING, label="Text"),
Configuration(_id="value2", _type=ConfigDataTypes.BOOL, label="Boolean"),
Configuration(
_id="value3",
_type=ConfigDataTypes.STRING,
label="Multiple Choice",
options=["value1", "value2", "value3"],
),
]
modes = {
"mode1": {"value1": "value1", "value2": "0", "value3": "value2"},
"mode2": {"value1": "value2", "value2": "1", "value3": "value3"},
"mode3": {"value1": "value3", "value2": "0", "value3": "value1"},
}
def get_text_template(self, name: str) -> str:
if name == "test1.sh":
return """
# sample script 1
# node id(${node.id}) name(${node.name})
# config: ${config}
echo hello
"""
elif name == "test2.sh":
return """
# sample script 2
# node id(${node.id}) name(${node.name})
# config: ${config}
echo hello2
"""

View file

@ -1,36 +1,35 @@
from typing import Any from typing import Any, Dict
import netaddr import netaddr
from core import utils from core import utils
from core.config import Configuration
from core.configservice.base import ConfigService, ConfigServiceMode from core.configservice.base import ConfigService, ConfigServiceMode
GROUP_NAME = "Utility" GROUP_NAME = "Utility"
class DefaultRouteService(ConfigService): class DefaultRouteService(ConfigService):
name: str = "DefaultRoute" name = "DefaultRoute"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["defaultroute.sh"] files = ["defaultroute.sh"]
executables: list[str] = ["ip"] executables = ["ip"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash defaultroute.sh"] startup = ["sh defaultroute.sh"]
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
# only add default routes for linked routing nodes # only add default routes for linked routing nodes
routes = [] routes = []
ifaces = self.node.get_ifaces() netifs = self.node.netifs(sort=True)
if ifaces: if netifs:
iface = ifaces[0] netif = netifs[0]
for ip in iface.ips(): for x in netif.addrlist:
net = ip.cidr net = netaddr.IPNetwork(x).cidr
if net.size > 1: if net.size > 1:
router = net[1] router = net[1]
routes.append(str(router)) routes.append(str(router))
@ -38,92 +37,97 @@ class DefaultRouteService(ConfigService):
class DefaultMulticastRouteService(ConfigService): class DefaultMulticastRouteService(ConfigService):
name: str = "DefaultMulticastRoute" name = "DefaultMulticastRoute"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["defaultmroute.sh"] files = ["defaultmroute.sh"]
executables: list[str] = [] executables = []
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash defaultmroute.sh"] startup = ["sh defaultmroute.sh"]
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifname = None ifname = None
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifname = iface.name if getattr(ifc, "control", False):
continue
ifname = ifc.name
break break
return dict(ifname=ifname) return dict(ifname=ifname)
class StaticRouteService(ConfigService): class StaticRouteService(ConfigService):
name: str = "StaticRoute" name = "StaticRoute"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["staticroute.sh"] files = ["staticroute.sh"]
executables: list[str] = [] executables = []
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash staticroute.sh"] startup = ["sh staticroute.sh"]
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
routes = [] routes = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
for ip in iface.ips(): if getattr(ifc, "control", False):
address = str(ip.ip) continue
if netaddr.valid_ipv6(address): for x in ifc.addrlist:
addr = x.split("/")[0]
if netaddr.valid_ipv6(addr):
dst = "3ffe:4::/64" dst = "3ffe:4::/64"
else: else:
dst = "10.9.8.0/24" dst = "10.9.8.0/24"
if ip[-2] != ip[1]: net = netaddr.IPNetwork(x)
routes.append((dst, ip[1])) if net[-2] != net[1]:
routes.append((dst, net[1]))
return dict(routes=routes) return dict(routes=routes)
class IpForwardService(ConfigService): class IpForwardService(ConfigService):
name: str = "IPForward" name = "IPForward"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["ipforward.sh"] files = ["ipforward.sh"]
executables: list[str] = ["sysctl"] executables = ["sysctl"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash ipforward.sh"] startup = ["sh ipforward.sh"]
validate: list[str] = [] validate = []
shutdown: list[str] = [] shutdown = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
devnames = [] devnames = []
for iface in self.node.get_ifaces(): for ifc in self.node.netifs():
devname = utils.sysctl_devname(iface.name) devname = utils.sysctl_devname(ifc.name)
devnames.append(devname) devnames.append(devname)
return dict(devnames=devnames) return dict(devnames=devnames)
class SshService(ConfigService): class SshService(ConfigService):
name: str = "SSH" name = "SSH"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = ["/etc/ssh", "/var/run/sshd"] directories = ["/etc/ssh", "/var/run/sshd"]
files: list[str] = ["startsshd.sh", "/etc/ssh/sshd_config"] files = ["startsshd.sh", "/etc/ssh/sshd_config"]
executables: list[str] = ["sshd"] executables = ["sshd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash startsshd.sh"] startup = ["sh startsshd.sh"]
validate: list[str] = [] validate = []
shutdown: list[str] = ["killall sshd"] shutdown = ["killall sshd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
return dict( return dict(
sshcfgdir=self.directories[0], sshcfgdir=self.directories[0],
sshstatedir=self.directories[1], sshstatedir=self.directories[1],
@ -132,137 +136,146 @@ class SshService(ConfigService):
class DhcpService(ConfigService): class DhcpService(ConfigService):
name: str = "DHCP" name = "DHCP"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = ["/etc/dhcp", "/var/lib/dhcp"] directories = ["/etc/dhcp", "/var/lib/dhcp"]
files: list[str] = ["/etc/dhcp/dhcpd.conf"] files = ["/etc/dhcp/dhcpd.conf"]
executables: list[str] = ["dhcpd"] executables = ["dhcpd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["touch /var/lib/dhcp/dhcpd.leases", "dhcpd"] startup = ["touch /var/lib/dhcp/dhcpd.leases", "dhcpd"]
validate: list[str] = ["pidof dhcpd"] validate = ["pidof dhcpd"]
shutdown: list[str] = ["killall dhcpd"] shutdown = ["killall dhcpd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
subnets = [] subnets = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
for ip4 in iface.ip4s: if getattr(ifc, "control", False):
if ip4.size == 1: continue
continue for x in ifc.addrlist:
# divide the address space in half addr = x.split("/")[0]
index = (ip4.size - 2) / 2 if netaddr.valid_ipv4(addr):
rangelow = ip4[index] net = netaddr.IPNetwork(x)
rangehigh = ip4[-2] # divide the address space in half
subnets.append((ip4.cidr.ip, ip4.netmask, rangelow, rangehigh, ip4.ip)) index = (net.size - 2) / 2
rangelow = net[index]
rangehigh = net[-2]
subnets.append((net.ip, net.netmask, rangelow, rangehigh, addr))
return dict(subnets=subnets) return dict(subnets=subnets)
class DhcpClientService(ConfigService): class DhcpClientService(ConfigService):
name: str = "DHCPClient" name = "DHCPClient"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["startdhcpclient.sh"] files = ["startdhcpclient.sh"]
executables: list[str] = ["dhclient"] executables = ["dhclient"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash startdhcpclient.sh"] startup = ["sh startdhcpclient.sh"]
validate: list[str] = ["pidof dhclient"] validate = ["pidof dhclient"]
shutdown: list[str] = ["killall dhclient"] shutdown = ["killall dhclient"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
continue
ifnames.append(ifc.name)
return dict(ifnames=ifnames) return dict(ifnames=ifnames)
class FtpService(ConfigService): class FtpService(ConfigService):
name: str = "FTP" name = "FTP"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = ["/var/run/vsftpd/empty", "/var/ftp"] directories = ["/var/run/vsftpd/empty", "/var/ftp"]
files: list[str] = ["vsftpd.conf"] files = ["vsftpd.conf"]
executables: list[str] = ["vsftpd"] executables = ["vsftpd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["vsftpd ./vsftpd.conf"] startup = ["vsftpd ./vsftpd.conf"]
validate: list[str] = ["pidof vsftpd"] validate = ["pidof vsftpd"]
shutdown: list[str] = ["killall vsftpd"] shutdown = ["killall vsftpd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
class PcapService(ConfigService): class PcapService(ConfigService):
name: str = "pcap" name = "pcap"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [] directories = []
files: list[str] = ["pcap.sh"] files = ["pcap.sh"]
executables: list[str] = ["tcpdump"] executables = ["tcpdump"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash pcap.sh start"] startup = ["sh pcap.sh start"]
validate: list[str] = ["pidof tcpdump"] validate = ["pidof tcpdump"]
shutdown: list[str] = ["bash pcap.sh stop"] shutdown = ["sh pcap.sh stop"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifnames = [] ifnames = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifnames.append(iface.name) if getattr(ifc, "control", False):
return dict(ifnames=ifnames) continue
ifnames.append(ifc.name)
return dict()
class RadvdService(ConfigService): class RadvdService(ConfigService):
name: str = "radvd" name = "radvd"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = ["/etc/radvd", "/var/run/radvd"] directories = ["/etc/radvd"]
files: list[str] = ["/etc/radvd/radvd.conf"] files = ["/etc/radvd/radvd.conf"]
executables: list[str] = ["radvd"] executables = ["radvd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = [ startup = ["radvd -C /etc/radvd/radvd.conf -m logfile -l /var/log/radvd.log"]
"radvd -C /etc/radvd/radvd.conf -m logfile -l /var/log/radvd.log" validate = ["pidof radvd"]
] shutdown = ["pkill radvd"]
validate: list[str] = ["pidof radvd"] validation_mode = ConfigServiceMode.BLOCKING
shutdown: list[str] = ["pkill radvd"] default_configs = []
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING modes = {}
default_configs: list[Configuration] = []
modes: dict[str, dict[str, str]] = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifaces = [] interfaces = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
if getattr(ifc, "control", False):
continue
prefixes = [] prefixes = []
for ip6 in iface.ip6s: for x in ifc.addrlist:
prefixes.append(str(ip6)) addr = x.split("/")[0]
if netaddr.valid_ipv6(addr):
prefixes.append(x)
if not prefixes: if not prefixes:
continue continue
ifaces.append((iface.name, prefixes)) interfaces.append((ifc.name, prefixes))
return dict(ifaces=ifaces) return dict(interfaces=interfaces)
class AtdService(ConfigService): class AtdService(ConfigService):
name: str = "atd" name = "atd"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = ["/var/spool/cron/atjobs", "/var/spool/cron/atspool"] directories = ["/var/spool/cron/atjobs", "/var/spool/cron/atspool"]
files: list[str] = ["startatd.sh"] files = ["startatd.sh"]
executables: list[str] = ["atd"] executables = ["atd"]
dependencies: list[str] = [] dependencies = []
startup: list[str] = ["bash startatd.sh"] startup = ["sh startatd.sh"]
validate: list[str] = ["pidof atd"] validate = ["pidof atd"]
shutdown: list[str] = ["pkill atd"] shutdown = ["pkill atd"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING validation_mode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = [] default_configs = []
modes: dict[str, dict[str, str]] = {} modes = {}
class HttpService(ConfigService): class HttpService(ConfigService):
name: str = "HTTP" name = "HTTP"
group: str = GROUP_NAME group = GROUP_NAME
directories: list[str] = [ directories = [
"/etc/apache2", "/etc/apache2",
"/var/run/apache2", "/var/run/apache2",
"/var/log/apache2", "/var/log/apache2",
@ -270,22 +283,20 @@ class HttpService(ConfigService):
"/var/lock/apache2", "/var/lock/apache2",
"/var/www", "/var/www",
] ]
files: list[str] = [ files = ["/etc/apache2/apache2.conf", "/etc/apache2/envvars", "/var/www/index.html"]
"/etc/apache2/apache2.conf", executables = ["apache2ctl"]
"/etc/apache2/envvars", dependencies = []
"/var/www/index.html", startup = ["chown www-data /var/lock/apache2", "apache2ctl start"]
] validate = ["pidof apache2"]
executables: list[str] = ["apache2ctl"] shutdown = ["apache2ctl stop"]
dependencies: list[str] = [] validation_mode = ConfigServiceMode.BLOCKING
startup: list[str] = ["chown www-data /var/lock/apache2", "apache2ctl start"] default_configs = []
validate: list[str] = ["pidof apache2"] modes = {}
shutdown: list[str] = ["apache2ctl stop"]
validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING
default_configs: list[Configuration] = []
modes: dict[str, dict[str, str]] = {}
def data(self) -> dict[str, Any]: def data(self) -> Dict[str, Any]:
ifaces = [] interfaces = []
for iface in self.node.get_ifaces(control=False): for ifc in self.node.netifs():
ifaces.append(iface) if getattr(ifc, "control", False):
return dict(ifaces=ifaces) continue
interfaces.append(ifc)
return dict(interfaces=interfaces)
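
The DHCP service's data() turns every IPv4 interface address into a served subnet tuple: it skips the network and broadcast addresses and hands out roughly the upper half of the subnet, from the midpoint up to the last usable address. A small standalone illustration of that arithmetic with netaddr (using integer division so the midpoint can index the network):

import netaddr

net = netaddr.IPNetwork("10.0.0.1/24")  # hypothetical interface address
index = (net.size - 2) // 2             # midpoint, ignoring network/broadcast
rangelow = net[index]                   # 10.0.0.127
rangehigh = net[-2]                     # 10.0.0.254, last usable address
print(net.cidr.ip, net.netmask, rangelow, rangehigh)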

View file

@ -5,8 +5,8 @@
<p>This is the default web page for this server.</p> <p>This is the default web page for this server.</p>
<p>The web server software is running but no content has been added, yet.</p> <p>The web server software is running but no content has been added, yet.</p>
<ul> <ul>
% for iface in ifaces: % for ifc in interfaces:
<li>${iface.name} - ${iface.addrlist}</li> <li>${ifc.name} - ${ifc.addrlist}</li>
% endfor % endfor
</ul> </ul>
</body> </body>

View file

@ -13,5 +13,4 @@ sysctl -w net.ipv4.conf.default.rp_filter=0
sysctl -w net.ipv4.conf.${devname}.forwarding=1 sysctl -w net.ipv4.conf.${devname}.forwarding=1
sysctl -w net.ipv4.conf.${devname}.send_redirects=0 sysctl -w net.ipv4.conf.${devname}.send_redirects=0
sysctl -w net.ipv4.conf.${devname}.rp_filter=0 sysctl -w net.ipv4.conf.${devname}.rp_filter=0
sysctl -w net.ipv6.conf.${devname}.forwarding=1
% endfor % endfor

View file

@ -3,7 +3,7 @@
# (-s snap length, -C limit pcap file length, -n disable name resolution) # (-s snap length, -C limit pcap file length, -n disable name resolution)
if [ "x$1" = "xstart" ]; then if [ "x$1" = "xstart" ]; then
% for ifname in ifnames: % for ifname in ifnames:
tcpdump -s 12288 -C 10 -n -w ${node.name}.${ifname}.pcap -i ${ifname} > /dev/null 2>&1 & tcpdump -s 12288 -C 10 -n -w ${node.name}.${ifname}.pcap -i ${ifname} < /dev/null &
% endfor % endfor
elif [ "x$1" = "xstop" ]; then elif [ "x$1" = "xstop" ]; then
mkdir -p $SESSION_DIR/pcap mkdir -p $SESSION_DIR/pcap

View file

@ -1,5 +1,5 @@
# auto-generated by RADVD service (utility.py) # auto-generated by RADVD service (utility.py)
% for ifname, prefixes in ifaces: % for ifname, prefixes in values:
interface ${ifname} interface ${ifname}
{ {
AdvSendAdvert on; AdvSendAdvert on;

View file

@ -1,5 +1,19 @@
from pathlib import Path from core.utils import which
COREDPY_VERSION: str = "@PACKAGE_VERSION@" COREDPY_VERSION = "@PACKAGE_VERSION@"
CORE_CONF_DIR: Path = Path("@CORE_CONF_DIR@") CORE_CONF_DIR = "@CORE_CONF_DIR@"
CORE_DATA_DIR: Path = Path("@CORE_DATA_DIR@") CORE_DATA_DIR = "@CORE_DATA_DIR@"
QUAGGA_STATE_DIR = "@CORE_STATE_DIR@/run/quagga"
FRR_STATE_DIR = "@CORE_STATE_DIR@/run/frr"
VNODED_BIN = which("vnoded", required=True)
VCMD_BIN = which("vcmd", required=True)
SYSCTL_BIN = which("sysctl", required=True)
IP_BIN = which("ip", required=True)
ETHTOOL_BIN = which("ethtool", required=True)
TC_BIN = which("tc", required=True)
EBTABLES_BIN = which("ebtables", required=True)
MOUNT_BIN = which("mount", required=True)
UMOUNT_BIN = which("umount", required=True)
OVS_BIN = which("ovs-vsctl", required=False)
OVS_FLOW_BIN = which("ovs-ofctl", required=False)
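
The rewritten constants module resolves every helper binary once at import time through core.utils.which, failing immediately when a required tool is missing and tolerating absence for optional ones such as Open vSwitch. The helper's implementation is not part of this diff; a minimal sketch of what such a lookup could look like using only the standard library (the function body and error type here are assumptions, not CORE's actual code):

import shutil

def which(command: str, required: bool) -> str:
    """Return the full path to a command; raise if it is required but missing."""
    path = shutil.which(command)
    if path is None and required:
        raise ValueError(f"required command not found: {command}")
    return path

IP_BIN = which("ip", required=True)
OVS_BIN = which("ovs-vsctl", required=False)  # optional, may be None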

View file

@ -0,0 +1,34 @@
"""
EMANE Bypass model for CORE
"""
from core.config import Configuration
from core.emane import emanemodel
from core.emulator.enumerations import ConfigDataTypes
class EmaneBypassModel(emanemodel.EmaneModel):
name = "emane_bypass"
# values to ignore, when writing xml files
config_ignore = {"none"}
# mac definitions
mac_library = "bypassmaclayer"
mac_config = [
Configuration(
_id="none",
_type=ConfigDataTypes.BOOL,
default="0",
label="There are no parameters for the bypass model.",
)
]
# phy definitions
phy_library = "bypassphylayer"
phy_config = []
@classmethod
def load(cls, emane_prefix: str) -> None:
# ignore default logic
pass
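
The bypass model above is about as small as an EmaneModel gets: it names its MAC and PHY plugins, supplies static configuration lists, and overrides load() because there are no manifests to parse. A model that does ship EMANE manifest files would instead rely on the default load() shown further down in emanemodel.py; a hedged sketch of that shape, with the plugin and manifest names invented for illustration:

from core.emane import emanemodel

class ExampleRadioModel(emanemodel.EmaneModel):
    """Hypothetical model; the library and manifest names are placeholders."""

    name = "emane_example"
    # MAC plugin plus the manifest that the inherited load() will parse
    mac_library = "exampleradiomaclayer"
    mac_xml = "exampleradiomaclayer.xml"
    # PHY settings fall back to the universal defaults defined on the base class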

View file

@ -3,26 +3,23 @@ commeffect.py: EMANE CommEffect model for CORE
""" """
import logging import logging
from pathlib import Path import os
from typing import Dict, List
from lxml import etree from lxml import etree
from core.config import ConfigGroup, Configuration from core.config import ConfigGroup, Configuration
from core.emane import emanemanifest, emanemodel from core.emane import emanemanifest, emanemodel
from core.emulator.data import LinkOptions
from core.nodes.interface import CoreInterface from core.nodes.interface import CoreInterface
from core.xml import emanexml from core.xml import emanexml
logger = logging.getLogger(__name__)
try: try:
from emane.events.commeffectevent import CommEffectEvent from emane.events.commeffectevent import CommEffectEvent
except ImportError: except ImportError:
try: try:
from emanesh.events.commeffectevent import CommEffectEvent from emanesh.events.commeffectevent import CommEffectEvent
except ImportError: except ImportError:
CommEffectEvent = None logging.debug("compatible emane python bindings not installed")
logger.debug("compatible emane python bindings not installed")
def convert_none(x: float) -> int: def convert_none(x: float) -> int:
@ -38,39 +35,33 @@ def convert_none(x: float) -> int:
class EmaneCommEffectModel(emanemodel.EmaneModel): class EmaneCommEffectModel(emanemodel.EmaneModel):
name: str = "emane_commeffect" name = "emane_commeffect"
shim_library: str = "commeffectshim"
shim_xml: str = "commeffectshim.xml" shim_library = "commeffectshim"
shim_defaults: dict[str, str] = {} shim_xml = "commeffectshim.xml"
config_shim: list[Configuration] = [] shim_defaults = {}
config_shim = []
# comm effect does not need the default phy and external configurations # comm effect does not need the default phy and external configurations
phy_config: list[Configuration] = [] phy_config = []
external_config: list[Configuration] = [] external_config = []
@classmethod @classmethod
def load(cls, emane_prefix: Path) -> None: def load(cls, emane_prefix: str) -> None:
cls._load_platform_config(emane_prefix) shim_xml_path = os.path.join(emane_prefix, "share/emane/manifest", cls.shim_xml)
shim_xml_path = emane_prefix / "share/emane/manifest" / cls.shim_xml
cls.config_shim = emanemanifest.parse(shim_xml_path, cls.shim_defaults) cls.config_shim = emanemanifest.parse(shim_xml_path, cls.shim_defaults)
@classmethod @classmethod
def configurations(cls) -> list[Configuration]: def configurations(cls) -> List[Configuration]:
return cls.platform_config + cls.config_shim return cls.config_shim
@classmethod @classmethod
def config_groups(cls) -> list[ConfigGroup]: def config_groups(cls) -> List[ConfigGroup]:
platform_len = len(cls.platform_config) return [ConfigGroup("CommEffect SHIM Parameters", 1, len(cls.configurations()))]
return [
ConfigGroup("Platform Parameters", 1, platform_len),
ConfigGroup(
"CommEffect SHIM Parameters",
platform_len + 1,
len(cls.configurations()),
),
]
def build_xml_files(self, config: dict[str, str], iface: CoreInterface) -> None: def build_xml_files(
self, config: Dict[str, str], interface: CoreInterface = None
) -> None:
""" """
Build the necessary nem and commeffect XMLs in the given path. Build the necessary nem and commeffect XMLs in the given path.
If an individual NEM has a nonstandard config, we need to build If an individual NEM has a nonstandard config, we need to build
@ -78,19 +69,26 @@ class EmaneCommEffectModel(emanemodel.EmaneModel):
nXXemane_commeffectnem.xml, nXXemane_commeffectshim.xml are used. nXXemane_commeffectnem.xml, nXXemane_commeffectshim.xml are used.
:param config: emane model configuration for the node and interface :param config: emane model configuration for the node and interface
:param iface: interface for the emane node :param interface: interface for the emane node
:return: nothing :return: nothing
""" """
# retrieve xml names
nem_name = emanexml.nem_file_name(self, interface)
shim_name = emanexml.shim_file_name(self, interface)
# create and write nem document # create and write nem document
nem_element = etree.Element("nem", name=f"{self.name} NEM", type="unstructured") nem_element = etree.Element("nem", name=f"{self.name} NEM", type="unstructured")
transport_name = emanexml.transport_file_name(iface) transport_type = "virtual"
etree.SubElement(nem_element, "transport", definition=transport_name) if interface and interface.transport_type == "raw":
transport_type = "raw"
transport_file = emanexml.transport_file_name(self.id, transport_type)
etree.SubElement(nem_element, "transport", definition=transport_file)
# set shim configuration # set shim configuration
nem_name = emanexml.nem_file_name(iface)
shim_name = emanexml.shim_file_name(iface)
etree.SubElement(nem_element, "shim", definition=shim_name) etree.SubElement(nem_element, "shim", definition=shim_name)
emanexml.create_node_file(iface.node, nem_element, "nem", nem_name)
nem_file = os.path.join(self.session.session_dir, nem_name)
emanexml.create_file(nem_element, "nem", nem_file)
# create and write shim document # create and write shim document
shim_element = etree.Element( shim_element = etree.Element(
@ -109,34 +107,48 @@ class EmaneCommEffectModel(emanemodel.EmaneModel):
ff = config["filterfile"] ff = config["filterfile"]
if ff.strip() != "": if ff.strip() != "":
emanexml.add_param(shim_element, "filterfile", ff) emanexml.add_param(shim_element, "filterfile", ff)
emanexml.create_node_file(iface.node, shim_element, "shim", shim_name)
# create transport xml shim_file = os.path.join(self.session.session_dir, shim_name)
emanexml.create_transport_xml(iface, config) emanexml.create_file(shim_element, "shim", shim_file)
def linkconfig( def linkconfig(
self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None self,
netif: CoreInterface,
bw: float = None,
delay: float = None,
loss: float = None,
duplicate: float = None,
jitter: float = None,
netif2: CoreInterface = None,
) -> None: ) -> None:
""" """
Generate CommEffect events when a Link Message is received having Generate CommEffect events when a Link Message is received having
link parameters. link parameters.
""" """
if iface is None or iface2 is None: service = self.session.emane.service
logger.warning("%s: missing NEM information", self.name) if service is None:
logging.warning("%s: EMANE event service unavailable", self.name)
return return
if netif is None or netif2 is None:
logging.warning("%s: missing NEM information", self.name)
return
# TODO: batch these into multiple events per transmission # TODO: batch these into multiple events per transmission
# TODO: may want to split out seconds portion of delay and jitter # TODO: may want to split out seconds portion of delay and jitter
event = CommEffectEvent() event = CommEffectEvent()
nem1 = self.session.emane.get_nem_id(iface) emane_node = self.session.get_node(self.id)
nem2 = self.session.emane.get_nem_id(iface2) nemid = emane_node.getnemid(netif)
logger.info("sending comm effect event") nemid2 = emane_node.getnemid(netif2)
mbw = bw
logging.info("sending comm effect event")
event.append( event.append(
nem1, nemid,
latency=convert_none(options.delay), latency=convert_none(delay),
jitter=convert_none(options.jitter), jitter=convert_none(jitter),
loss=convert_none(options.loss), loss=convert_none(loss),
duplicate=convert_none(options.dup), duplicate=convert_none(duplicate),
unicast=int(convert_none(options.bandwidth)), unicast=int(convert_none(bw)),
broadcast=int(convert_none(options.bandwidth)), broadcast=int(convert_none(mbw)),
) )
self.session.emane.publish_event(nem2, event) service.publish(nemid2, event)

File diff suppressed because it is too large

View file

@ -1,11 +1,9 @@
import logging import logging
from pathlib import Path from typing import Dict, List
from core.config import Configuration from core.config import Configuration
from core.emulator.enumerations import ConfigDataTypes from core.emulator.enumerations import ConfigDataTypes
logger = logging.getLogger(__name__)
manifest = None manifest = None
try: try:
from emane.shell import manifest from emane.shell import manifest
@ -13,8 +11,7 @@ except ImportError:
try: try:
from emanesh import manifest from emanesh import manifest
except ImportError: except ImportError:
manifest = None logging.debug("compatible emane python bindings not installed")
logger.debug("compatible emane python bindings not installed")
def _type_value(config_type: str) -> ConfigDataTypes: def _type_value(config_type: str) -> ConfigDataTypes:
@ -32,7 +29,7 @@ def _type_value(config_type: str) -> ConfigDataTypes:
return ConfigDataTypes[config_type] return ConfigDataTypes[config_type]
def _get_possible(config_type: str, config_regex: str) -> list[str]: def _get_possible(config_type: str, config_regex: str) -> List[str]:
""" """
Retrieve possible config value options based on emane regexes. Retrieve possible config value options based on emane regexes.
@ -50,7 +47,7 @@ def _get_possible(config_type: str, config_regex: str) -> list[str]:
return [] return []
def _get_default(config_type_name: str, config_value: list[str]) -> str: def _get_default(config_type_name: str, config_value: List[str]) -> str:
""" """
Convert default configuration values to one used by core. Convert default configuration values to one used by core.
@ -73,10 +70,9 @@ def _get_default(config_type_name: str, config_value: list[str]) -> str:
return config_default return config_default
def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]: def parse(manifest_path: str, defaults: Dict[str, str]) -> List[Configuration]:
""" """
Parses a valid emane manifest file and converts the provided configuration values Parses a valid emane manifest file and converts the provided configuration values into ones used by core.
into ones used by core.
:param manifest_path: absolute manifest file path :param manifest_path: absolute manifest file path
:param defaults: used to override default values for configurations :param defaults: used to override default values for configurations
@ -88,7 +84,7 @@ def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]:
return [] return []
# load configuration file # load configuration file
manifest_file = manifest.Manifest(str(manifest_path)) manifest_file = manifest.Manifest(manifest_path)
manifest_configurations = manifest_file.getAllConfiguration() manifest_configurations = manifest_file.getAllConfiguration()
configurations = [] configurations = []
@ -119,8 +115,8 @@ def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]:
config_descriptions = f"{config_descriptions} file" config_descriptions = f"{config_descriptions} file"
configuration = Configuration( configuration = Configuration(
id=config_name, _id=config_name,
type=config_type_value, _type=config_type_value,
default=config_default, default=config_default,
options=possible, options=possible,
label=config_descriptions, label=config_descriptions,
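
parse() is the entry point EmaneModel.load() uses for each manifest: when the EMANE python bindings are importable, it reads the XML manifest, converts every parameter into a core Configuration, and applies any caller-supplied default overrides. A usage sketch in the newer pathlib style from the left column; the manifest name and override key are just examples, and the printed attribute names assume the Configuration fields mirror the keyword arguments used above:

from pathlib import Path

from core.emane import emanemanifest

emane_prefix = Path("/usr")  # wherever EMANE is installed
manifest = emane_prefix / "share/emane/manifest" / "rfpipemaclayer.xml"
configurations = emanemanifest.parse(manifest, {"flowcontrolenable": "0"})
for config in configurations:
    print(config.id, config.default, config.label)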

View file

@ -2,21 +2,17 @@
Defines Emane Models used within CORE. Defines Emane Models used within CORE.
""" """
import logging import logging
from pathlib import Path import os
from typing import Optional from typing import Dict, List
from core.config import ConfigBool, ConfigGroup, ConfigString, Configuration from core.config import ConfigGroup, Configuration
from core.emane import emanemanifest from core.emane import emanemanifest
from core.emulator.data import LinkOptions from core.emulator.enumerations import ConfigDataTypes
from core.errors import CoreError from core.errors import CoreError
from core.location.mobility import WirelessModel from core.location.mobility import WirelessModel
from core.nodes.interface import CoreInterface from core.nodes.interface import CoreInterface
from core.xml import emanexml from core.xml import emanexml
logger = logging.getLogger(__name__)
DEFAULT_DEV: str = "ctrl0"
MANIFEST_PATH: str = "share/emane/manifest"
class EmaneModel(WirelessModel): class EmaneModel(WirelessModel):
""" """
@ -25,150 +21,160 @@ class EmaneModel(WirelessModel):
configurable parameters. Helper functions also live here. configurable parameters. Helper functions also live here.
""" """
# default platform configuration settings
platform_controlport: str = "controlportendpoint"
platform_xml: str = "nemmanager.xml"
platform_defaults: dict[str, str] = {
"eventservicedevice": DEFAULT_DEV,
"eventservicegroup": "224.1.2.8:45703",
"otamanagerdevice": DEFAULT_DEV,
"otamanagergroup": "224.1.2.8:45702",
}
platform_config: list[Configuration] = []
# default mac configuration settings # default mac configuration settings
mac_library: Optional[str] = None mac_library = None
mac_xml: Optional[str] = None mac_xml = None
mac_defaults: dict[str, str] = {} mac_defaults = {}
mac_config: list[Configuration] = [] mac_config = []
# default phy configuration settings, using the universal model # default phy configuration settings, using the universal model
phy_library: Optional[str] = None phy_library = None
phy_xml: str = "emanephy.xml" phy_xml = "emanephy.xml"
phy_defaults: dict[str, str] = { phy_defaults = {"subid": "1", "propagationmodel": "2ray", "noisemode": "none"}
"subid": "1", phy_config = []
"propagationmodel": "2ray",
"noisemode": "none",
}
phy_config: list[Configuration] = []
# support for external configurations # support for external configurations
external_config: list[Configuration] = [ external_config = [
ConfigBool(id="external", default="0"), Configuration("external", ConfigDataTypes.BOOL, default="0"),
ConfigString(id="platformendpoint", default="127.0.0.1:40001"), Configuration(
ConfigString(id="transportendpoint", default="127.0.0.1:50002"), "platformendpoint", ConfigDataTypes.STRING, default="127.0.0.1:40001"
),
Configuration(
"transportendpoint", ConfigDataTypes.STRING, default="127.0.0.1:50002"
),
] ]
config_ignore: set[str] = set() config_ignore = set()
@classmethod @classmethod
def load(cls, emane_prefix: Path) -> None: def load(cls, emane_prefix: str) -> None:
""" """
Called after being loaded within the EmaneManager. Provides configured Called after being loaded within the EmaneManager. Provides configured emane_prefix for
emane_prefix for parsing xml files. parsing xml files.
:param emane_prefix: configured emane prefix path :param emane_prefix: configured emane prefix path
:return: nothing :return: nothing
""" """
cls._load_platform_config(emane_prefix) manifest_path = "share/emane/manifest"
# load mac configuration # load mac configuration
mac_xml_path = emane_prefix / MANIFEST_PATH / cls.mac_xml mac_xml_path = os.path.join(emane_prefix, manifest_path, cls.mac_xml)
cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults) cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults)
# load phy configuration # load phy configuration
phy_xml_path = emane_prefix / MANIFEST_PATH / cls.phy_xml phy_xml_path = os.path.join(emane_prefix, manifest_path, cls.phy_xml)
cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults) cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults)
@classmethod @classmethod
def _load_platform_config(cls, emane_prefix: Path) -> None: def configurations(cls) -> List[Configuration]:
platform_xml_path = emane_prefix / MANIFEST_PATH / cls.platform_xml
cls.platform_config = emanemanifest.parse(
platform_xml_path, cls.platform_defaults
)
# remove controlport configuration, since core will set this directly
controlport_index = None
for index, configuration in enumerate(cls.platform_config):
if configuration.id == cls.platform_controlport:
controlport_index = index
break
if controlport_index is not None:
cls.platform_config.pop(controlport_index)
@classmethod
def configurations(cls) -> list[Configuration]:
""" """
Returns the combination of all configurations (mac, phy, and external). Returns the combination of all configurations (mac, phy, and external).
:return: all configurations :return: all configurations
""" """
return ( return cls.mac_config + cls.phy_config + cls.external_config
cls.platform_config + cls.mac_config + cls.phy_config + cls.external_config
)
@classmethod @classmethod
def config_groups(cls) -> list[ConfigGroup]: def config_groups(cls) -> List[ConfigGroup]:
""" """
Returns the defined configuration groups. Returns the defined configuration groups.
:return: list of configuration groups. :return: list of configuration groups.
""" """
platform_len = len(cls.platform_config) mac_len = len(cls.mac_config)
mac_len = len(cls.mac_config) + platform_len
phy_len = len(cls.phy_config) + mac_len phy_len = len(cls.phy_config) + mac_len
config_len = len(cls.configurations()) config_len = len(cls.configurations())
return [ return [
ConfigGroup("Platform Parameters", 1, platform_len), ConfigGroup("MAC Parameters", 1, mac_len),
ConfigGroup("MAC Parameters", platform_len + 1, mac_len),
ConfigGroup("PHY Parameters", mac_len + 1, phy_len), ConfigGroup("PHY Parameters", mac_len + 1, phy_len),
ConfigGroup("External Parameters", phy_len + 1, config_len), ConfigGroup("External Parameters", phy_len + 1, config_len),
] ]
def build_xml_files(self, config: dict[str, str], iface: CoreInterface) -> None: def build_xml_files(
self, config: Dict[str, str], interface: CoreInterface = None
) -> None:
""" """
Builds xml files for this emane model. Creates a nem.xml file that points to Builds xml files for this emane model. Creates a nem.xml file that points to
both mac.xml and phy.xml definitions. both mac.xml and phy.xml definitions.
:param config: emane model configuration for the node and interface :param config: emane model configuration for the node and interface
:param iface: interface to run emane for :param interface: interface for the emane node
:return: nothing :return: nothing
""" """
# create nem, mac, and phy xml files nem_name = emanexml.nem_file_name(self, interface)
emanexml.create_nem_xml(self, iface, config) mac_name = emanexml.mac_file_name(self, interface)
emanexml.create_mac_xml(self, iface, config) phy_name = emanexml.phy_file_name(self, interface)
emanexml.create_phy_xml(self, iface, config)
emanexml.create_transport_xml(iface, config)
def post_startup(self, iface: CoreInterface) -> None: # remote server for file
server = None
if interface is not None:
server = interface.node.server
# check if this is external
transport_type = "virtual"
if interface and interface.transport_type == "raw":
transport_type = "raw"
transport_name = emanexml.transport_file_name(self.id, transport_type)
# create nem xml file
nem_file = os.path.join(self.session.session_dir, nem_name)
emanexml.create_nem_xml(
self, config, nem_file, transport_name, mac_name, phy_name, server
)
# create mac xml file
mac_file = os.path.join(self.session.session_dir, mac_name)
emanexml.create_mac_xml(self, config, mac_file, server)
# create phy xml file
phy_file = os.path.join(self.session.session_dir, phy_name)
emanexml.create_phy_xml(self, config, phy_file, server)
def post_startup(self) -> None:
""" """
Logic to execute after the emane manager is finished with startup. Logic to execute after the emane manager is finished with startup.
:param iface: interface for post startup
:return: nothing :return: nothing
""" """
logger.debug("emane model(%s) has no post setup tasks", self.name) logging.debug("emane model(%s) has no post setup tasks", self.name)
def update(self, moved_ifaces: list[CoreInterface]) -> None: def update(self, moved: bool, moved_netifs: List[CoreInterface]) -> None:
""" """
Invoked from MobilityModel when nodes are moved; this causes Invoked from MobilityModel when nodes are moved; this causes
emane location events to be generated for the nodes in the moved emane location events to be generated for the nodes in the moved
list, making EmaneModels compatible with Ns2ScriptedMobility. list, making EmaneModels compatible with Ns2ScriptedMobility.
:param moved_ifaces: interfaces that were moved :param moved: were nodes moved
:param moved_netifs: interfaces that were moved
:return: nothing :return: nothing
""" """
try: try:
self.session.emane.set_nem_positions(moved_ifaces) wlan = self.session.get_node(self.id)
wlan.setnempositions(moved_netifs)
except CoreError: except CoreError:
logger.exception("error during update") logging.exception("error during update")
def linkconfig( def linkconfig(
self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None self,
netif: CoreInterface,
bw: float = None,
delay: float = None,
loss: float = None,
duplicate: float = None,
jitter: float = None,
netif2: CoreInterface = None,
) -> None: ) -> None:
""" """
Invoked when a Link Message is received. Default is unimplemented. Invoked when a Link Message is received. Default is unimplemented.
:param iface: interface one :param netif: interface one
:param options: options for configuring link :param bw: bandwidth to set to
:param iface2: interface two :param delay: packet delay to set to
:param loss: packet loss to set to
:param duplicate: duplicate percentage to set to
:param jitter: jitter to set to
:param netif2: interface two
:return: nothing :return: nothing
""" """
logger.warning("emane model(%s) does not support link config", self.name) logging.warning(
"emane model(%s) does not support link configuration", self.name
)


@ -0,0 +1,22 @@
"""
ieee80211abg.py: EMANE IEEE 802.11abg model for CORE
"""
import os
from core.emane import emanemodel
class EmaneIeee80211abgModel(emanemodel.EmaneModel):
# model name
name = "emane_ieee80211abg"
# mac configuration
mac_library = "ieee80211abgmaclayer"
mac_xml = "ieee80211abgmaclayer.xml"
@classmethod
def load(cls, emane_prefix: str) -> None:
cls.mac_defaults["pcrcurveuri"] = os.path.join(
emane_prefix, "share/emane/xml/models/mac/ieee80211abg/ieee80211pcr.xml"
)
super().load(emane_prefix)


@ -2,45 +2,43 @@ import logging
import sched import sched
import threading import threading
import time import time
from typing import TYPE_CHECKING, Optional from typing import TYPE_CHECKING, Dict, List, Tuple
import netaddr
from lxml import etree from lxml import etree
from core.emane.nodes import EmaneNet
from core.emulator.data import LinkData from core.emulator.data import LinkData
from core.emulator.enumerations import LinkTypes, MessageFlags from core.emulator.enumerations import LinkTypes, MessageFlags
from core.nodes.network import CtrlNet from core.nodes.network import CtrlNet
logger = logging.getLogger(__name__)
try: try:
from emane import shell from emane import shell
except ImportError: except ImportError:
try: try:
from emanesh import shell from emanesh import shell
except ImportError: except ImportError:
shell = None logging.debug("compatible emane python bindings not installed")
logger.debug("compatible emane python bindings not installed")
if TYPE_CHECKING: if TYPE_CHECKING:
from core.emane.emanemanager import EmaneManager from core.emane.emanemanager import EmaneManager
MAC_COMPONENT_INDEX: int = 1 DEFAULT_PORT = 47_000
EMANE_RFPIPE: str = "rfpipemaclayer" MAC_COMPONENT_INDEX = 1
EMANE_80211: str = "ieee80211abgmaclayer" EMANE_RFPIPE = "rfpipemaclayer"
EMANE_TDMA: str = "tdmaeventschedulerradiomodel" EMANE_80211 = "ieee80211abgmaclayer"
SINR_TABLE: str = "NeighborStatusTable" EMANE_TDMA = "tdmaeventschedulerradiomodel"
NEM_SELF: int = 65535 SINR_TABLE = "NeighborStatusTable"
NEM_SELF = 65535
class LossTable: class LossTable:
def __init__(self, losses: dict[float, float]) -> None: def __init__(self, losses: Dict[float, float]) -> None:
self.losses: dict[float, float] = losses self.losses = losses
self.sinrs: list[float] = sorted(self.losses.keys()) self.sinrs = sorted(self.losses.keys())
self.loss_lookup: dict[int, float] = {} self.loss_lookup = {}
for index, value in enumerate(self.sinrs): for index, value in enumerate(self.sinrs):
self.loss_lookup[index] = self.losses[value] self.loss_lookup[index] = self.losses[value]
self.mac_id: Optional[str] = None self.mac_id = None
def get_loss(self, sinr: float) -> float: def get_loss(self, sinr: float) -> float:
index = self._get_index(sinr) index = self._get_index(sinr)
@ -56,11 +54,11 @@ class LossTable:
class EmaneLink: class EmaneLink:
def __init__(self, from_nem: int, to_nem: int, sinr: float) -> None: def __init__(self, from_nem: int, to_nem: int, sinr: float) -> None:
self.from_nem: int = from_nem self.from_nem = from_nem
self.to_nem: int = to_nem self.to_nem = to_nem
self.sinr: float = sinr self.sinr = sinr
self.last_seen: Optional[float] = None self.last_seen = None
self.updated: bool = False self.updated = False
self.touch() self.touch()
def update(self, sinr: float) -> None: def update(self, sinr: float) -> None:
@ -79,12 +77,10 @@ class EmaneLink:
class EmaneClient: class EmaneClient:
def __init__(self, address: str, port: int) -> None: def __init__(self, address: str) -> None:
self.address: str = address self.address = address
self.client: shell.ControlPortClient = shell.ControlPortClient( self.client = shell.ControlPortClient(self.address, DEFAULT_PORT)
self.address, port self.nems = {}
)
self.nems: dict[int, LossTable] = {}
self.setup() self.setup()
def setup(self) -> None: def setup(self) -> None:
@ -93,7 +89,7 @@ class EmaneClient:
# get mac config # get mac config
mac_id, _, emane_model = components[MAC_COMPONENT_INDEX] mac_id, _, emane_model = components[MAC_COMPONENT_INDEX]
mac_config = self.client.getConfiguration(mac_id) mac_config = self.client.getConfiguration(mac_id)
logger.debug( logging.debug(
"address(%s) nem(%s) emane(%s)", self.address, nem_id, emane_model "address(%s) nem(%s) emane(%s)", self.address, nem_id, emane_model
) )
@ -103,14 +99,14 @@ class EmaneClient:
elif emane_model == EMANE_RFPIPE: elif emane_model == EMANE_RFPIPE:
loss_table = self.handle_rfpipe(mac_config) loss_table = self.handle_rfpipe(mac_config)
else: else:
logger.warning("unknown emane link model: %s", emane_model) logging.warning("unknown emane link model: %s", emane_model)
continue continue
logger.info("monitoring links nem(%s) model(%s)", nem_id, emane_model) logging.info("monitoring links nem(%s) model(%s)", nem_id, emane_model)
loss_table.mac_id = mac_id loss_table.mac_id = mac_id
self.nems[nem_id] = loss_table self.nems[nem_id] = loss_table
def check_links( def check_links(
self, links: dict[tuple[int, int], EmaneLink], loss_threshold: int self, links: Dict[Tuple[int, int], EmaneLink], loss_threshold: int
) -> None: ) -> None:
for from_nem, loss_table in self.nems.items(): for from_nem, loss_table in self.nems.items():
tables = self.client.getStatisticTable(loss_table.mac_id, (SINR_TABLE,)) tables = self.client.getStatisticTable(loss_table.mac_id, (SINR_TABLE,))
@ -138,14 +134,14 @@ class EmaneClient:
link = EmaneLink(from_nem, to_nem, sinr) link = EmaneLink(from_nem, to_nem, sinr)
links[link_key] = link links[link_key] = link
def handle_tdma(self, config: dict[str, tuple]): def handle_tdma(self, config: Dict[str, Tuple]):
pcr = config["pcrcurveuri"][0][0] pcr = config["pcrcurveuri"][0][0]
logger.debug("tdma pcr: %s", pcr) logging.debug("tdma pcr: %s", pcr)
def handle_80211(self, config: dict[str, tuple]) -> LossTable: def handle_80211(self, config: Dict[str, Tuple]) -> LossTable:
unicastrate = config["unicastrate"][0][0] unicastrate = config["unicastrate"][0][0]
pcr = config["pcrcurveuri"][0][0] pcr = config["pcrcurveuri"][0][0]
logger.debug("80211 pcr: %s", pcr) logging.debug("80211 pcr: %s", pcr)
tree = etree.parse(pcr) tree = etree.parse(pcr)
root = tree.getroot() root = tree.getroot()
table = root.find("table") table = root.find("table")
@ -159,9 +155,9 @@ class EmaneClient:
losses[sinr] = por losses[sinr] = por
return LossTable(losses) return LossTable(losses)
def handle_rfpipe(self, config: dict[str, tuple]) -> LossTable: def handle_rfpipe(self, config: Dict[str, Tuple]) -> LossTable:
pcr = config["pcrcurveuri"][0][0] pcr = config["pcrcurveuri"][0][0]
logger.debug("rfpipe pcr: %s", pcr) logging.debug("rfpipe pcr: %s", pcr)
tree = etree.parse(pcr) tree = etree.parse(pcr)
root = tree.getroot() root = tree.getroot()
table = root.find("table") table = root.find("table")
@ -178,24 +174,23 @@ class EmaneClient:
class EmaneLinkMonitor: class EmaneLinkMonitor:
def __init__(self, emane_manager: "EmaneManager") -> None: def __init__(self, emane_manager: "EmaneManager") -> None:
self.emane_manager: "EmaneManager" = emane_manager self.emane_manager = emane_manager
self.clients: list[EmaneClient] = [] self.clients = []
self.links: dict[tuple[int, int], EmaneLink] = {} self.links = {}
self.complete_links: set[tuple[int, int]] = set() self.complete_links = set()
self.loss_threshold: Optional[int] = None self.loss_threshold = None
self.link_interval: Optional[int] = None self.link_interval = None
self.link_timeout: Optional[int] = None self.link_timeout = None
self.scheduler: Optional[sched.scheduler] = None self.scheduler = None
self.running: bool = False self.running = False
def start(self) -> None: def start(self) -> None:
options = self.emane_manager.session.options self.loss_threshold = int(self.emane_manager.get_config("loss_threshold"))
self.loss_threshold = options.get_int("loss_threshold") self.link_interval = int(self.emane_manager.get_config("link_interval"))
self.link_interval = options.get_int("link_interval") self.link_timeout = int(self.emane_manager.get_config("link_timeout"))
self.link_timeout = options.get_int("link_timeout")
self.initialize() self.initialize()
if not self.clients: if not self.clients:
logger.info("no valid emane models to monitor links") logging.info("no valid emane models to monitor links")
return return
self.scheduler = sched.scheduler() self.scheduler = sched.scheduler()
self.scheduler.enter(0, 0, self.check_links) self.scheduler.enter(0, 0, self.check_links)
@ -205,28 +200,25 @@ class EmaneLinkMonitor:
def initialize(self) -> None: def initialize(self) -> None:
addresses = self.get_addresses() addresses = self.get_addresses()
for address, port in addresses: for address in addresses:
client = EmaneClient(address, port) client = EmaneClient(address)
if client.nems: if client.nems:
self.clients.append(client) self.clients.append(client)
def get_addresses(self) -> list[tuple[str, int]]: def get_addresses(self) -> List[str]:
addresses = [] addresses = []
nodes = self.emane_manager.getnodes() nodes = self.emane_manager.getnodes()
for node in nodes: for node in nodes:
control = None for netif in node.netifs():
ports = [] if isinstance(netif.net, CtrlNet):
for iface in node.get_ifaces(): ip4 = None
if isinstance(iface.net, CtrlNet): for x in netif.addrlist:
ip4 = iface.get_ip4() address, prefix = x.split("/")
if netaddr.valid_ipv4(address):
ip4 = address
if ip4: if ip4:
control = str(ip4.ip) addresses.append(ip4)
if isinstance(iface.net, EmaneNet): break
port = self.emane_manager.get_nem_port(iface)
ports.append(port)
if control:
for port in ports:
addresses.append((control, port))
return addresses return addresses
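A hedged illustration of the two return shapes above; the addresses and ports are invented. The newer code pairs a node's control-net IPv4 address with the NEM port of each EMANE interface, while the 2020 code returns bare control addresses and lets EmaneClient fall back to DEFAULT_PORT (47000).
# hypothetical results for one node with control address 172.16.0.1 and two NEMs
addresses_new = [("172.16.0.1", 47000), ("172.16.0.1", 47001)]  # (address, port) tuples
addresses_old = ["172.16.0.1"]  # plain addresses; EmaneClient(address) uses DEFAULT_PORT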
def check_links(self) -> None: def check_links(self) -> None:
@ -237,7 +229,7 @@ class EmaneLinkMonitor:
client.check_links(self.links, self.loss_threshold) client.check_links(self.links, self.loss_threshold)
except shell.ControlPortException: except shell.ControlPortException:
if self.running: if self.running:
logger.exception("link monitor error") logging.exception("link monitor error")
# find new links # find new links
current_links = set(self.links.keys()) current_links = set(self.links.keys())
@ -273,48 +265,63 @@ class EmaneLinkMonitor:
if self.running: if self.running:
self.scheduler.enter(self.link_interval, 0, self.check_links) self.scheduler.enter(self.link_interval, 0, self.check_links)
def get_complete_id(self, link_id: tuple[int, int]) -> tuple[int, int]: def get_complete_id(self, link_id: Tuple[int, int]) -> Tuple[int, int]:
value1, value2 = link_id value_one, value_two = link_id
if value1 < value2: if value_one < value_two:
return value1, value2 return value_one, value_two
else: else:
return value2, value1 return value_two, value_one
def is_complete_link(self, link_id: tuple[int, int]) -> bool: def is_complete_link(self, link_id: Tuple[int, int]) -> bool:
reverse_id = link_id[1], link_id[0] reverse_id = link_id[1], link_id[0]
return link_id in self.links and reverse_id in self.links return link_id in self.links and reverse_id in self.links
def get_link_label(self, link_id: tuple[int, int]) -> str: def get_link_label(self, link_id: Tuple[int, int]) -> str:
source_id = tuple(sorted(link_id)) source_id = tuple(sorted(link_id))
source_link = self.links[source_id] source_link = self.links[source_id]
dest_id = link_id[::-1] dest_id = link_id[::-1]
dest_link = self.links[dest_id] dest_link = self.links[dest_id]
return f"{source_link.sinr:.1f} / {dest_link.sinr:.1f}" return f"{source_link.sinr:.1f} / {dest_link.sinr:.1f}"
def send_link(self, message_type: MessageFlags, link_id: tuple[int, int]) -> None: def send_link(self, message_type: MessageFlags, link_id: Tuple[int, int]) -> None:
nem1, nem2 = link_id nem_one, nem_two = link_id
link = self.emane_manager.get_nem_link(nem1, nem2, message_type) emane_one, netif = self.emane_manager.nemlookup(nem_one)
if link: if not emane_one or not netif:
label = self.get_link_label(link_id) logging.error("invalid nem: %s", nem_one)
link.label = label return
self.emane_manager.session.broadcast_link(link) node_one = netif.node
emane_two, netif = self.emane_manager.nemlookup(nem_two)
if not emane_two or not netif:
logging.error("invalid nem: %s", nem_two)
return
node_two = netif.node
logging.debug(
"%s emane link from %s(%s) to %s(%s)",
message_type.name,
node_one.name,
nem_one,
node_two.name,
nem_two,
)
label = self.get_link_label(link_id)
self.send_message(message_type, label, node_one.id, node_two.id, emane_one.id)
def send_message( def send_message(
self, self,
message_type: MessageFlags, message_type: MessageFlags,
label: str, label: str,
node1: int, node_one: int,
node2: int, node_two: int,
emane_id: int, emane_id: int,
) -> None: ) -> None:
color = self.emane_manager.session.get_link_color(emane_id) color = self.emane_manager.session.get_link_color(emane_id)
link_data = LinkData( link_data = LinkData(
message_type=message_type, message_type=message_type,
type=LinkTypes.WIRELESS,
label=label, label=label,
node1_id=node1, node1_id=node_one,
node2_id=node2, node2_id=node_two,
network_id=emane_id, network_id=emane_id,
link_type=LinkTypes.WIRELESS,
color=color, color=color,
) )
self.emane_manager.session.broadcast_link(link_data) self.emane_manager.session.broadcast_link(link_data)


@ -1,69 +0,0 @@
import logging
import pkgutil
from pathlib import Path
from core import utils
from core.emane import models as emane_models
from core.emane.emanemodel import EmaneModel
from core.errors import CoreError
logger = logging.getLogger(__name__)
class EmaneModelManager:
models: dict[str, type[EmaneModel]] = {}
@classmethod
def load_locals(cls, emane_prefix: Path) -> list[str]:
"""
Load local core emane models and make them available.
:param emane_prefix: installed emane prefix
:return: list of errors encountered loading emane models
"""
errors = []
for module_info in pkgutil.walk_packages(
emane_models.__path__, f"{emane_models.__name__}."
):
models = utils.load_module(module_info.name, EmaneModel)
for model in models:
logger.debug("loading emane model: %s", model.name)
try:
model.load(emane_prefix)
cls.models[model.name] = model
except CoreError as e:
errors.append(model.name)
logger.debug("not loading emane model(%s): %s", model.name, e)
return errors
@classmethod
def load(cls, path: Path, emane_prefix: Path) -> list[str]:
"""
Search and load custom emane models and make them available.
:param path: path to search for custom emane models
:param emane_prefix: installed emane prefix
:return: list of errors encountered loading emane models
"""
subdirs = [x for x in path.iterdir() if x.is_dir()]
subdirs.append(path)
errors = []
for subdir in subdirs:
logger.debug("loading emane models from: %s", subdir)
models = utils.load_classes(subdir, EmaneModel)
for model in models:
logger.debug("loading emane model: %s", model.name)
try:
model.load(emane_prefix)
cls.models[model.name] = model
except CoreError as e:
errors.append(model.name)
logger.debug("not loading emane model(%s): %s", model.name, e)
return errors
@classmethod
def get(cls, name: str) -> type[EmaneModel]:
model = cls.models.get(name)
if model is None:
raise CoreError(f"emame model does not exist {name}")
return model
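A hedged usage sketch of the registry above, assuming EMANE is installed under the default /usr prefix; the model name mirrors the rfpipe model defined elsewhere in this diff.
errors = EmaneModelManager.load_locals(Path("/usr"))   # returns names that failed to load
rfpipe = EmaneModelManager.get("emane_rfpipe")          # raises CoreError for unknown names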


@ -1,32 +0,0 @@
"""
EMANE Bypass model for CORE
"""
from pathlib import Path
from core.config import ConfigBool, Configuration
from core.emane import emanemodel
class EmaneBypassModel(emanemodel.EmaneModel):
name: str = "emane_bypass"
# values to ignore, when writing xml files
config_ignore: set[str] = {"none"}
# mac definitions
mac_library: str = "bypassmaclayer"
mac_config: list[Configuration] = [
ConfigBool(
id="none",
default="0",
label="There are no parameters for the bypass model.",
)
]
# phy definitions
phy_library: str = "bypassphylayer"
phy_config: list[Configuration] = []
@classmethod
def load(cls, emane_prefix: Path) -> None:
cls._load_platform_config(emane_prefix)


@ -1,22 +0,0 @@
"""
ieee80211abg.py: EMANE IEEE 802.11abg model for CORE
"""
from pathlib import Path
from core.emane import emanemodel
class EmaneIeee80211abgModel(emanemodel.EmaneModel):
# model name
name: str = "emane_ieee80211abg"
# mac configuration
mac_library: str = "ieee80211abgmaclayer"
mac_xml: str = "ieee80211abgmaclayer.xml"
@classmethod
def load(cls, emane_prefix: Path) -> None:
cls.mac_defaults["pcrcurveuri"] = str(
emane_prefix / "share/emane/xml/models/mac/ieee80211abg/ieee80211pcr.xml"
)
super().load(emane_prefix)


@ -1,22 +0,0 @@
"""
rfpipe.py: EMANE RF-PIPE model for CORE
"""
from pathlib import Path
from core.emane import emanemodel
class EmaneRfPipeModel(emanemodel.EmaneModel):
# model name
name: str = "emane_rfpipe"
# mac configuration
mac_library: str = "rfpipemaclayer"
mac_xml: str = "rfpipemaclayer.xml"
@classmethod
def load(cls, emane_prefix: Path) -> None:
cls.mac_defaults["pcrcurveuri"] = str(
emane_prefix / "share/emane/xml/models/mac/rfpipe/rfpipepcr.xml"
)
super().load(emane_prefix)


@ -1,65 +0,0 @@
"""
tdma.py: EMANE TDMA model bindings for CORE
"""
import logging
from pathlib import Path
from core import constants, utils
from core.config import ConfigString
from core.emane import emanemodel
from core.emane.nodes import EmaneNet
from core.nodes.interface import CoreInterface
logger = logging.getLogger(__name__)
class EmaneTdmaModel(emanemodel.EmaneModel):
# model name
name: str = "emane_tdma"
# mac configuration
mac_library: str = "tdmaeventschedulerradiomodel"
mac_xml: str = "tdmaeventschedulerradiomodel.xml"
# add custom schedule options and ignore it when writing emane xml
schedule_name: str = "schedule"
default_schedule: Path = (
constants.CORE_DATA_DIR / "examples" / "tdma" / "schedule.xml"
)
config_ignore: set[str] = {schedule_name}
@classmethod
def load(cls, emane_prefix: Path) -> None:
cls.mac_defaults["pcrcurveuri"] = str(
emane_prefix
/ "share/emane/xml/models/mac/tdmaeventscheduler/tdmabasemodelpcr.xml"
)
super().load(emane_prefix)
config_item = ConfigString(
id=cls.schedule_name,
default=str(cls.default_schedule),
label="TDMA schedule file (core)",
)
cls.mac_config.insert(0, config_item)
def post_startup(self, iface: CoreInterface) -> None:
# get configured schedule
emane_net = self.session.get_node(self.id, EmaneNet)
config = self.session.emane.get_iface_config(emane_net, iface)
schedule = Path(config[self.schedule_name])
if not schedule.is_file():
logger.error("ignoring invalid tdma schedule: %s", schedule)
return
# initiate tdma schedule
nem_id = self.session.emane.get_nem_id(iface)
if not nem_id:
logger.error("could not find nem for interface")
return
service = self.session.emane.nem_service.get(nem_id)
if service:
device = service.device
logger.info(
"setting up tdma schedule: schedule(%s) device(%s)", schedule, device
)
utils.cmd(f"emaneevent-tdmaschedule -i {device} {schedule}")


@ -4,23 +4,18 @@ share the same MAC+PHY model.
""" """
import logging import logging
import time from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Type
from dataclasses import dataclass
from typing import TYPE_CHECKING, Callable, Optional, Union
from core.emulator.data import InterfaceData, LinkData, LinkOptions
from core.emulator.distributed import DistributedServer from core.emulator.distributed import DistributedServer
from core.emulator.enumerations import MessageFlags, RegisterTlvs from core.emulator.enumerations import LinkTypes, NodeTypes, RegisterTlvs
from core.errors import CoreCommandError, CoreError from core.nodes.base import CoreNetworkBase
from core.nodes.base import CoreNetworkBase, CoreNode, NodeOptions
from core.nodes.interface import CoreInterface from core.nodes.interface import CoreInterface
logger = logging.getLogger(__name__)
if TYPE_CHECKING: if TYPE_CHECKING:
from core.emane.emanemodel import EmaneModel
from core.emulator.session import Session from core.emulator.session import Session
from core.location.mobility import WayPointMobility from core.location.mobility import WirelessModel
WirelessModelType = Type[WirelessModel]
try: try:
from emane.events import LocationEvent from emane.events import LocationEvent
@ -28,122 +23,7 @@ except ImportError:
try: try:
from emanesh.events import LocationEvent from emanesh.events import LocationEvent
except ImportError: except ImportError:
LocationEvent = None logging.debug("compatible emane python bindings not installed")
logger.debug("compatible emane python bindings not installed")
class TunTap(CoreInterface):
"""
TUN/TAP virtual device in TAP mode
"""
def __init__(
self,
_id: int,
name: str,
localname: str,
use_ovs: bool,
node: CoreNode = None,
server: "DistributedServer" = None,
) -> None:
super().__init__(_id, name, localname, use_ovs, node=node, server=server)
self.node: CoreNode = node
def startup(self) -> None:
"""
Startup logic for a tunnel tap.
:return: nothing
"""
self.up = True
def shutdown(self) -> None:
"""
Shutdown functionality for a tunnel tap.
:return: nothing
"""
if not self.up:
return
self.up = False
def waitfor(
self, func: Callable[[], int], attempts: int = 10, maxretrydelay: float = 0.25
) -> bool:
"""
Wait for func() to return zero with exponential backoff.
:param func: function to wait for a result of zero
:param attempts: number of attempts to wait for a zero result
:param maxretrydelay: maximum retry delay
:return: True if wait succeeded, False otherwise
"""
delay = 0.01
result = False
for i in range(1, attempts + 1):
r = func()
if r == 0:
result = True
break
msg = f"attempt {i} failed with nonzero exit status {r}"
if i < attempts:
msg += ", retrying..."
logger.info(msg)
time.sleep(delay)
delay += delay
if delay > maxretrydelay:
delay = maxretrydelay
else:
msg += ", giving up"
logger.info(msg)
return result
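The waitfor helper above is a plain exponential-backoff loop; the standalone restatement below is only a sketch of the same pattern, not code from the tree.
import time

def wait_until_zero(func, attempts=10, maxretrydelay=0.25):
    # retry func() until it returns 0, doubling the sleep up to maxretrydelay
    delay = 0.01
    for _ in range(attempts):
        if func() == 0:
            return True
        time.sleep(delay)
        delay = min(delay * 2, maxretrydelay)
    return False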
def nodedevexists(self) -> int:
"""
Checks if device exists.
:return: 0 if device exists, 1 otherwise
"""
try:
self.node.node_net_client.device_show(self.name)
return 0
except CoreCommandError:
return 1
def waitfordevicenode(self) -> None:
"""
Check for the presence of a node device - the tap device may not appear right away, so wait for it.
:return: nothing
"""
logger.debug("waiting for device node: %s", self.name)
count = 0
while True:
result = self.waitfor(self.nodedevexists)
if result:
break
should_retry = count < 5
is_emane_running = self.node.session.emane.emanerunning(self.node)
if all([should_retry, is_emane_running]):
count += 1
else:
raise RuntimeError("node device failed to exist")
def set_ips(self) -> None:
"""
Set interface ip addresses.
:return: nothing
"""
self.waitfordevicenode()
for ip in self.ips():
self.node.node_net_client.create_address(self.name, str(ip))
@dataclass
class EmaneOptions(NodeOptions):
emane_model: str = None
"""name of emane model to associate an emane network to"""
class EmaneNet(CoreNetworkBase): class EmaneNet(CoreNetworkBase):
@ -153,138 +33,215 @@ class EmaneNet(CoreNetworkBase):
Emane controller object that exists in a session. Emane controller object that exists in a session.
""" """
apitype = NodeTypes.EMANE
linktype = LinkTypes.WIRED
type = "wlan"
is_emane = True
def __init__( def __init__(
self, self,
session: "Session", session: "Session",
_id: int = None, _id: int = None,
name: str = None, name: str = None,
start: bool = True,
server: DistributedServer = None, server: DistributedServer = None,
options: EmaneOptions = None,
) -> None: ) -> None:
options = options or EmaneOptions() super().__init__(session, _id, name, start, server)
super().__init__(session, _id, name, server, options) self.conf = ""
self.conf: str = "" self.up = False
self.mobility: Optional[WayPointMobility] = None self.nemidmap = {}
model_class = self.session.emane.get_model(options.emane_model) self.model = None
self.wireless_model: Optional["EmaneModel"] = model_class(self.session, self.id) self.mobility = None
if self.session.is_running():
self.session.emane.add_node(self)
@classmethod
def create_options(cls) -> EmaneOptions:
return EmaneOptions()
def linkconfig( def linkconfig(
self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None self,
netif: CoreInterface,
bw: float = None,
delay: float = None,
loss: float = None,
duplicate: float = None,
jitter: float = None,
netif2: CoreInterface = None,
) -> None: ) -> None:
""" """
The CommEffect model supports link configuration. The CommEffect model supports link configuration.
""" """
if not self.wireless_model: if not self.model:
return return
self.wireless_model.linkconfig(iface, options, iface2) self.model.linkconfig(
netif=netif,
bw=bw,
delay=delay,
loss=loss,
duplicate=duplicate,
jitter=jitter,
netif2=netif2,
)
def startup(self) -> None: def config(self, conf: str) -> None:
self.up = True self.conf = conf
def shutdown(self) -> None: def shutdown(self) -> None:
self.up = False
def link(self, iface1: CoreInterface, iface2: CoreInterface) -> None:
pass pass
def unlink(self, iface1: CoreInterface, iface2: CoreInterface) -> None: def link(self, netif1: CoreInterface, netif2: CoreInterface) -> None:
pass pass
def updatemodel(self, config: dict[str, str]) -> None: def unlink(self, netif1: CoreInterface, netif2: CoreInterface) -> None:
""" pass
Update configuration for the current model.
:param config: configuration to update model with def updatemodel(self, config: Dict[str, str]) -> None:
:return: nothing if not self.model:
""" raise ValueError("no model set to update for node(%s)", self.id)
if not self.wireless_model: logging.info(
raise CoreError(f"no model set to update for node({self.name})") "node(%s) updating model(%s): %s", self.id, self.model.name, config
logger.info(
"node(%s) updating model(%s): %s", self.id, self.wireless_model.name, config
) )
self.wireless_model.update_config(config) self.model.set_configs(config, node_id=self.id)
def setmodel( def setmodel(self, model: "WirelessModelType", config: Dict[str, str]) -> None:
self,
model: Union[type["EmaneModel"], type["WayPointMobility"]],
config: dict[str, str],
) -> None:
""" """
set the EmaneModel associated with this node set the EmaneModel associated with this node
""" """
logging.info("adding model: %s", model.name)
if model.config_type == RegisterTlvs.WIRELESS: if model.config_type == RegisterTlvs.WIRELESS:
self.wireless_model = model(session=self.session, _id=self.id) # EmaneModel really uses values from ConfigurableManager
self.wireless_model.update_config(config) # when buildnemxml() is called, not during init()
self.model = model(session=self.session, _id=self.id)
self.model.update_config(config)
elif model.config_type == RegisterTlvs.MOBILITY: elif model.config_type == RegisterTlvs.MOBILITY:
self.mobility = model(session=self.session, _id=self.id) self.mobility = model(session=self.session, _id=self.id)
self.mobility.update_config(config) self.mobility.update_config(config)
def links(self, flags: MessageFlags = MessageFlags.NONE) -> list[LinkData]: def setnemid(self, netif: CoreInterface, nemid: int) -> None:
links = []
emane_manager = self.session.emane
# gather current emane links
nem_ids = set()
for iface in self.get_ifaces():
nem_id = emane_manager.get_nem_id(iface)
nem_ids.add(nem_id)
emane_links = emane_manager.link_monitor.links
considered = set()
for link_key in emane_links:
considered_key = tuple(sorted(link_key))
if considered_key in considered:
continue
considered.add(considered_key)
nem1, nem2 = considered_key
# ignore links not related to this node
if nem1 not in nem_ids and nem2 not in nem_ids:
continue
# ignore incomplete links
if (nem2, nem1) not in emane_links:
continue
link = emane_manager.get_nem_link(nem1, nem2, flags)
if link:
links.append(link)
return links
def create_tuntap(self, node: CoreNode, iface_data: InterfaceData) -> CoreInterface:
""" """
Create a tuntap interface for the provided node. Record an interface to numerical ID mapping. The Emane controller
object manages and assigns these IDs for all NEMs.
:param node: node to create tuntap interface for
:param iface_data: interface data to create interface with
:return: created tuntap interface
""" """
with node.lock: self.nemidmap[netif] = nemid
if iface_data.id is not None and iface_data.id in node.ifaces:
raise CoreError( def getnemid(self, netif: CoreInterface) -> Optional[int]:
f"node({self.id}) interface({iface_data.id}) already exists" """
) Given an interface, return its numerical ID.
iface_id = ( """
iface_data.id if iface_data.id is not None else node.next_iface_id() if netif not in self.nemidmap:
return None
else:
return self.nemidmap[netif]
def getnemnetif(self, nemid: int) -> Optional[CoreInterface]:
"""
Given a numerical NEM ID, return its interface. This returns the
first interface that matches the given NEM ID.
"""
for netif in self.nemidmap:
if self.nemidmap[netif] == nemid:
return netif
return None
def netifs(self, sort: bool = True) -> List[CoreInterface]:
"""
Retrieve list of linked interfaces sorted by node number.
"""
return sorted(self._netif.values(), key=lambda ifc: ifc.node.id)
def installnetifs(self) -> None:
"""
Install TAP devices into their namespaces. This is done after
EMANE daemons have been started, because that is their only chance
to bind to the TAPs.
"""
if (
self.session.emane.genlocationevents()
and self.session.emane.service is None
):
warntxt = "unable to publish EMANE events because the eventservice "
warntxt += "Python bindings failed to load"
logging.error(warntxt)
for netif in self.netifs():
external = self.session.emane.get_config(
"external", self.id, self.model.name
) )
name = iface_data.name if iface_data.name is not None else f"eth{iface_id}" if external == "0":
session_id = self.session.short_session_id() netif.setaddrs()
localname = f"tap{node.id}.{iface_id}.{session_id}"
iface = TunTap(iface_id, name, localname, self.session.use_ovs(), node=node)
if iface_data.mac:
iface.set_mac(iface_data.mac)
for ip in iface_data.get_ips():
iface.add_ip(ip)
node.ifaces[iface_id] = iface
self.attach(iface)
if self.up:
iface.startup()
if self.session.is_running():
self.session.emane.start_iface(self, iface)
return iface
def adopt_iface(self, iface: CoreInterface, name: str) -> None: if not self.session.emane.genlocationevents():
raise CoreError( netif.poshook = None
f"emane network({self.name}) do not support adopting interfaces" continue
)
# at this point we register location handlers for generating
# EMANE location events
netif.poshook = self.setnemposition
netif.setposition()
def deinstallnetifs(self) -> None:
"""
Uninstall TAP devices. This invokes their shutdown method for
any required cleanup; the device may be actually removed when
emanetransportd terminates.
"""
for netif in self.netifs():
if "virtual" in netif.transport_type.lower():
netif.shutdown()
netif.poshook = None
def _nem_position(
self, netif: CoreInterface
) -> Optional[Tuple[int, float, float, float]]:
"""
Creates nem position for emane event for a given interface.
:param netif: interface to get nem emane position for
:return: nem position tuple, None otherwise
"""
nemid = self.getnemid(netif)
ifname = netif.localname
if nemid is None:
logging.info("nemid for %s is unknown", ifname)
return
node = netif.node
x, y, z = node.getposition()
lat, lon, alt = self.session.location.getgeo(x, y, z)
if node.position.alt is not None:
alt = node.position.alt
# altitude must be an integer or warning is printed
alt = int(round(alt))
return nemid, lon, lat, alt
def setnemposition(self, netif: CoreInterface) -> None:
"""
Publish a NEM location change event using the EMANE event service.
:param netif: interface to set nem position for
"""
if self.session.emane.service is None:
logging.info("position service not available")
return
position = self._nem_position(netif)
if position:
nemid, lon, lat, alt = position
event = LocationEvent()
event.append(nemid, latitude=lat, longitude=lon, altitude=alt)
self.session.emane.service.publish(0, event)
def setnempositions(self, moved_netifs: List[CoreInterface]) -> None:
"""
Several NEMs have moved, from e.g. a WaypointMobilityModel
calculation. Generate a single EMANE Location Event with one
entry for each netif that has moved.
"""
if len(moved_netifs) == 0:
return
if self.session.emane.service is None:
logging.info("position service not available")
return
event = LocationEvent()
for netif in moved_netifs:
position = self._nem_position(netif)
if position:
nemid, lon, lat, alt = position
event.append(nemid, latitude=lat, longitude=lon, altitude=alt)
self.session.emane.service.publish(0, event)
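A hedged sketch of the batched publish performed by setnempositions above; the NEM ids and coordinates are invented, and session is assumed to be an active Session with the EMANE event service bound.
event = LocationEvent()
for nemid, lon, lat, alt in [(1, -121.77, 47.58, 2), (2, -121.76, 47.58, 3)]:  # invented
    event.append(nemid, latitude=lat, longitude=lon, altitude=alt)
session.emane.service.publish(0, event)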


@ -0,0 +1,22 @@
"""
rfpipe.py: EMANE RF-PIPE model for CORE
"""
import os
from core.emane import emanemodel
class EmaneRfPipeModel(emanemodel.EmaneModel):
# model name
name = "emane_rfpipe"
# mac configuration
mac_library = "rfpipemaclayer"
mac_xml = "rfpipemaclayer.xml"
@classmethod
def load(cls, emane_prefix: str) -> None:
cls.mac_defaults["pcrcurveuri"] = os.path.join(
emane_prefix, "share/emane/xml/models/mac/rfpipe/rfpipepcr.xml"
)
super().load(emane_prefix)

66
daemon/core/emane/tdma.py Normal file

@ -0,0 +1,66 @@
"""
tdma.py: EMANE TDMA model bindings for CORE
"""
import logging
import os
from core import constants, utils
from core.config import Configuration
from core.emane import emanemodel
from core.emulator.enumerations import ConfigDataTypes
class EmaneTdmaModel(emanemodel.EmaneModel):
# model name
name = "emane_tdma"
# mac configuration
mac_library = "tdmaeventschedulerradiomodel"
mac_xml = "tdmaeventschedulerradiomodel.xml"
# add custom schedule options and ignore it when writing emane xml
schedule_name = "schedule"
default_schedule = os.path.join(
constants.CORE_DATA_DIR, "examples", "tdma", "schedule.xml"
)
config_ignore = {schedule_name}
@classmethod
def load(cls, emane_prefix: str) -> None:
cls.mac_defaults["pcrcurveuri"] = os.path.join(
emane_prefix,
"share/emane/xml/models/mac/tdmaeventscheduler/tdmabasemodelpcr.xml",
)
super().load(emane_prefix)
cls.mac_config.insert(
0,
Configuration(
_id=cls.schedule_name,
_type=ConfigDataTypes.STRING,
default=cls.default_schedule,
label="TDMA schedule file (core)",
),
)
def post_startup(self) -> None:
"""
Logic to execute after the emane manager is finished with startup.
:return: nothing
"""
# get configured schedule
config = self.session.emane.get_configs(node_id=self.id, config_type=self.name)
if not config:
return
schedule = config[self.schedule_name]
# get the set event device
event_device = self.session.emane.event_device
# initiate tdma schedule
logging.info(
"setting up tdma schedule: schedule(%s) device(%s)", schedule, event_device
)
args = f"emaneevent-tdmaschedule -i {event_device} {schedule}"
utils.cmd(args)


@ -1,67 +0,0 @@
from collections.abc import Callable
from typing import TypeVar, Union
from core.emulator.data import (
ConfigData,
EventData,
ExceptionData,
FileData,
LinkData,
NodeData,
)
from core.errors import CoreError
T = TypeVar(
"T", bound=Union[EventData, ExceptionData, NodeData, LinkData, FileData, ConfigData]
)
class BroadcastManager:
def __init__(self) -> None:
"""
Creates a BroadcastManager instance.
"""
self.handlers: dict[type[T], set[Callable[[T], None]]] = {}
def send(self, data: T) -> None:
"""
Retrieve handlers for data, and run all current handlers.
:param data: data to provide to handlers
:return: nothing
"""
handlers = self.handlers.get(type(data), set())
for handler in handlers:
handler(data)
def add_handler(self, data_type: type[T], handler: Callable[[T], None]) -> None:
"""
Add a handler for a given data type.
:param data_type: type of data to add handler for
:param handler: handler to add
:return: nothing
"""
handlers = self.handlers.setdefault(data_type, set())
if handler in handlers:
raise CoreError(
f"cannot add data({data_type}) handler({repr(handler)}), "
f"already exists"
)
handlers.add(handler)
def remove_handler(self, data_type: type[T], handler: Callable[[T], None]) -> None:
"""
Remove a handler for a given data type.
:param data_type: type of data to remove handler for
:param handler: handler to remove
:return: nothing
"""
handlers = self.handlers.get(data_type, set())
if handler not in handlers:
raise CoreError(
f"cannot remove data({data_type}) handler({repr(handler)}), "
f"does not exist"
)
handlers.remove(handler)
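A hedged usage sketch of the handler registry above; the handler body and field values are invented, and MessageFlags.ADD is assumed to be a valid flag.
manager = BroadcastManager()

def on_link(link: LinkData) -> None:
    print("link update:", link.node1_id, link.node2_id)

manager.add_handler(LinkData, on_link)
manager.send(LinkData(message_type=MessageFlags.ADD, node1_id=1, node2_id=2))  # runs on_link
manager.remove_handler(LinkData, on_link)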


@ -1,239 +0,0 @@
import logging
from typing import TYPE_CHECKING, Optional
from core import utils
from core.emulator.data import InterfaceData
from core.errors import CoreError
from core.nodes.base import CoreNode
from core.nodes.interface import DEFAULT_MTU
from core.nodes.network import CtrlNet
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from core.emulator.session import Session
CTRL_NET_ID: int = 9001
ETC_HOSTS_PATH: str = "/etc/hosts"
class ControlNetManager:
def __init__(self, session: "Session") -> None:
self.session: "Session" = session
self.etc_hosts_header: str = f"CORE session {self.session.id} host entries"
def _etc_hosts_enabled(self) -> bool:
"""
Determines if /etc/hosts should be configured.
:return: True if /etc/hosts should be configured, False otherwise
"""
return self.session.options.get_bool("update_etc_hosts", False)
def _get_server_ifaces(
self,
) -> tuple[None, Optional[str], Optional[str], Optional[str]]:
"""
Retrieve control net server interfaces.
:return: control net server interfaces
"""
d0 = self.session.options.get("controlnetif0")
if d0:
logger.error("controlnet0 cannot be assigned with a host interface")
d1 = self.session.options.get("controlnetif1")
d2 = self.session.options.get("controlnetif2")
d3 = self.session.options.get("controlnetif3")
return None, d1, d2, d3
def _get_prefixes(
self,
) -> tuple[Optional[str], Optional[str], Optional[str], Optional[str]]:
"""
Retrieve control net prefixes.
:return: control net prefixes
"""
p = self.session.options.get("controlnet")
p0 = self.session.options.get("controlnet0")
p1 = self.session.options.get("controlnet1")
p2 = self.session.options.get("controlnet2")
p3 = self.session.options.get("controlnet3")
if not p0 and p:
p0 = p
return p0, p1, p2, p3
def update_etc_hosts(self) -> None:
"""
Add the IP addresses of control interfaces to the /etc/hosts file.
:return: nothing
"""
if not self._etc_hosts_enabled():
return
control_net = self.get_control_net(0)
entries = ""
for iface in control_net.get_ifaces():
name = iface.node.name
for ip in iface.ips():
entries += f"{ip.ip} {name}\n"
logger.info("adding entries to /etc/hosts")
utils.file_munge(ETC_HOSTS_PATH, self.etc_hosts_header, entries)
def clear_etc_hosts(self) -> None:
"""
Clear IP addresses of control interfaces from the /etc/hosts file.
:return: nothing
"""
if not self._etc_hosts_enabled():
return
logger.info("removing /etc/hosts file entries")
utils.file_demunge(ETC_HOSTS_PATH, self.etc_hosts_header)
def get_control_net_index(self, dev: str) -> int:
"""
Retrieve control net index.
:param dev: device to get control net index for
:return: control net index, -1 otherwise
"""
if dev[0:4] == "ctrl" and int(dev[4]) in (0, 1, 2, 3):
index = int(dev[4])
if index == 0:
return index
if index < 4 and self._get_prefixes()[index] is not None:
return index
return -1
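Hedged examples of the device-name parsing above, assuming manager is a ControlNetManager bound to a session; whether an index is accepted depends on the configured prefixes, which are not taken from the diff.
manager.get_control_net_index("ctrl0")   # -> 0; control net 0 needs no configured prefix
manager.get_control_net_index("ctrl2")   # -> 2 only if a controlnet2 prefix is set, else -1
manager.get_control_net_index("eth0")    # -> -1; not a control net device name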
def get_control_net(self, index: int) -> Optional[CtrlNet]:
"""
Retrieve a control net based on index.
:param index: control net index
:return: control net when available, None otherwise
"""
try:
return self.session.get_node(CTRL_NET_ID + index, CtrlNet)
except CoreError:
return None
def add_control_net(
self, index: int, conf_required: bool = True
) -> Optional[CtrlNet]:
"""
Create a control network bridge as necessary. The conf_required flag,
when False, causes a control network bridge to be added even if
one has not been configured.
:param index: network index to add
:param conf_required: flag to check if conf is required
:return: control net node
"""
logger.info(
"checking to add control net index(%s) conf_required(%s)",
index,
conf_required,
)
# check for valid index
if not (0 <= index <= 3):
raise CoreError(f"invalid control net index({index})")
# return any existing control net bridge
control_net = self.get_control_net(index)
if control_net:
logger.info("control net index(%s) already exists", index)
return control_net
# retrieve prefix for current index
index_prefix = self._get_prefixes()[index]
if not index_prefix:
if conf_required:
return None
else:
index_prefix = CtrlNet.DEFAULT_PREFIX_LIST[index]
# retrieve valid prefix from old style values
prefixes = index_prefix.split()
if len(prefixes) > 1:
# a list of per-host prefixes is provided
try:
prefix = prefixes[0].split(":", 1)[1]
except IndexError:
prefix = prefixes[0]
else:
prefix = prefixes[0]
# use the updown script for control net 0 only
updown_script = None
if index == 0:
updown_script = self.session.options.get("controlnet_updown_script")
# build a new controlnet bridge
_id = CTRL_NET_ID + index
server_iface = self._get_server_ifaces()[index]
logger.info(
"adding controlnet(%s) prefix(%s) updown(%s) server interface(%s)",
_id,
prefix,
updown_script,
server_iface,
)
options = CtrlNet.create_options()
options.prefix = prefix
options.updown_script = updown_script
options.serverintf = server_iface
control_net = self.session.create_node(CtrlNet, False, _id, options=options)
control_net.brname = f"ctrl{index}.{self.session.short_session_id()}"
control_net.startup()
return control_net
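A hedged illustration of the old-style per-host prefix handling in add_control_net above; the server names and networks are invented.
index_prefix = "server1:172.16.1.0/24 server2:172.16.2.0/24"   # invented old-style value
prefixes = index_prefix.split()
prefix = prefixes[0].split(":", 1)[1]   # -> "172.16.1.0/24", the prefix used for this host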
def remove_control_net(self, index: int) -> None:
"""
Removes control net.
:param index: index of control net to remove
:return: nothing
"""
control_net = self.get_control_net(index)
if control_net:
logger.info("removing control net index(%s)", index)
self.session.delete_node(control_net.id)
def add_control_iface(self, node: CoreNode, index: int) -> None:
"""
Adds a control net interface to a node.
:param node: node to add control net interface to
:param index: index of control net to add interface to
:return: nothing
:raises CoreError: if control net doesn't exist, interface already exists,
or there is an error creating the interface
"""
control_net = self.get_control_net(index)
if not control_net:
raise CoreError(f"control net index({index}) does not exist")
iface_id = control_net.CTRLIF_IDX_BASE + index
if node.ifaces.get(iface_id):
raise CoreError(f"control iface({iface_id}) already exists")
try:
logger.info(
"node(%s) adding control net index(%s) interface(%s)",
node.name,
index,
iface_id,
)
ip4 = control_net.prefix[node.id]
ip4_mask = control_net.prefix.prefixlen
iface_data = InterfaceData(
id=iface_id,
name=f"ctrl{index}",
mac=utils.random_mac(),
ip4=ip4,
ip4_mask=ip4_mask,
mtu=DEFAULT_MTU,
)
iface = node.create_iface(iface_data)
control_net.attach(iface)
iface.control = True
except ValueError:
raise CoreError(
f"error adding control net interface to node({node.id}), "
f"invalid control net prefix({control_net.prefix}), "
"a longer prefix length may be required"
)


@ -1,17 +1,34 @@
import atexit
import logging import logging
import os import os
from pathlib import Path import signal
import sys
from typing import Mapping, Type
from core import utils import core.services
from core import configservices
from core.configservice.manager import ConfigServiceManager from core.configservice.manager import ConfigServiceManager
from core.emane.modelmanager import EmaneModelManager
from core.emulator.session import Session from core.emulator.session import Session
from core.executables import get_requirements
from core.services.coreservices import ServiceManager from core.services.coreservices import ServiceManager
logger = logging.getLogger(__name__)
DEFAULT_EMANE_PREFIX: str = "/usr" def signal_handler(signal_number: int, _) -> None:
"""
Handle signals and force an exit with cleanup.
:param signal_number: signal number
:param _: ignored
:return: nothing
"""
logging.info("caught signal: %s", signal_number)
sys.exit(signal_number)
signal.signal(signal.SIGHUP, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGUSR1, signal_handler)
signal.signal(signal.SIGUSR2, signal_handler)
class CoreEmu: class CoreEmu:
@ -19,7 +36,7 @@ class CoreEmu:
Provides logic for creating and configuring CORE sessions and the nodes within them. Provides logic for creating and configuring CORE sessions and the nodes within them.
""" """
def __init__(self, config: dict[str, str] = None) -> None: def __init__(self, config: Mapping[str, str] = None) -> None:
""" """
Create a CoreEmu object. Create a CoreEmu object.
@ -29,83 +46,40 @@ class CoreEmu:
os.umask(0) os.umask(0)
# configuration # configuration
config = config if config else {} if config is None:
self.config: dict[str, str] = config config = {}
self.config = config
# session management # session management
self.sessions: dict[int, Session] = {} self.sessions = {}
# load services # load services
self.service_errors: list[str] = [] self.service_errors = []
self.service_manager: ConfigServiceManager = ConfigServiceManager() self.load_services()
self._load_services()
# check and load emane # config services
self.has_emane: bool = False self.service_manager = ConfigServiceManager()
self._load_emane() config_services_path = os.path.abspath(os.path.dirname(configservices.__file__))
self.service_manager.load(config_services_path)
# check executables exist on path
self._validate_env()
def _validate_env(self) -> None:
"""
Validates executables CORE depends on exist on path.
:return: nothing
:raises core.errors.CoreError: when an executable does not exist on path
"""
use_ovs = self.config.get("ovs") == "1"
for requirement in get_requirements(use_ovs):
utils.which(requirement, required=True)
def _load_services(self) -> None:
"""
Loads default and custom services for use within CORE.
:return: nothing
"""
# load default services
self.service_errors = ServiceManager.load_locals()
# load custom services
service_paths = self.config.get("custom_services_dir")
logger.debug("custom service paths: %s", service_paths)
if service_paths is not None:
for service_path in service_paths.split(","):
service_path = Path(service_path.strip())
custom_service_errors = ServiceManager.add_services(service_path)
self.service_errors.extend(custom_service_errors)
# load default config services
self.service_manager.load_locals()
# load custom config services
custom_dir = self.config.get("custom_config_services_dir") custom_dir = self.config.get("custom_config_services_dir")
if custom_dir is not None: if custom_dir:
custom_dir = Path(custom_dir)
self.service_manager.load(custom_dir) self.service_manager.load(custom_dir)
def _load_emane(self) -> None: # catch exit event
""" atexit.register(self.shutdown)
Check if emane is installed and load models.
:return: nothing def load_services(self) -> None:
""" # load default services
# check for emane self.service_errors = core.services.load()
path = utils.which("emane", required=False)
self.has_emane = path is not None # load custom services
if not self.has_emane: service_paths = self.config.get("custom_services_dir")
logger.info("emane is not installed, emane functionality disabled") logging.debug("custom service paths: %s", service_paths)
return if service_paths:
# get version for service_path in service_paths.split(","):
emane_version = utils.cmd("emane --version") service_path = service_path.strip()
logger.info("using emane: %s", emane_version) custom_service_errors = ServiceManager.add_services(service_path)
emane_prefix = self.config.get("emane_prefix", DEFAULT_EMANE_PREFIX) self.service_errors.extend(custom_service_errors)
emane_prefix = Path(emane_prefix)
EmaneModelManager.load_locals(emane_prefix)
# load custom models
custom_path = self.config.get("emane_models_dir")
if custom_path is not None:
logger.info("loading custom emane models: %s", custom_path)
custom_path = Path(custom_path)
EmaneModelManager.load(custom_path, emane_prefix)
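A hedged example of the coreemu configuration keys consumed above; all paths are invented and only illustrate the expected shape of the dict.
config = {
    "custom_services_dir": "/opt/core/myservices",            # comma-separated paths allowed
    "custom_config_services_dir": "/opt/core/myconfigservices",
    "emane_prefix": "/usr",
    "emane_models_dir": "/opt/core/emanemodels",
}
coreemu = CoreEmu(config)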
def shutdown(self) -> None: def shutdown(self) -> None:
""" """
@ -113,12 +87,14 @@ class CoreEmu:
:return: nothing :return: nothing
""" """
logger.info("shutting down all sessions") logging.info("shutting down all sessions")
while self.sessions: sessions = self.sessions.copy()
_, session = self.sessions.popitem() self.sessions.clear()
for _id in sessions:
session = sessions[_id]
session.shutdown() session.shutdown()
def create_session(self, _id: int = None, _cls: type[Session] = Session) -> Session: def create_session(self, _id: int = None, _cls: Type[Session] = Session) -> Session:
""" """
Create a new CORE session. Create a new CORE session.
@ -132,7 +108,7 @@ class CoreEmu:
_id += 1 _id += 1
session = _cls(_id, config=self.config) session = _cls(_id, config=self.config)
session.service_manager = self.service_manager session.service_manager = self.service_manager
logger.info("created session: %s", _id) logging.info("created session: %s", _id)
self.sessions[_id] = session self.sessions[_id] = session
return session return session
@ -143,14 +119,13 @@ class CoreEmu:
:param _id: session id to delete :param _id: session id to delete
:return: True if deleted, False otherwise :return: True if deleted, False otherwise
""" """
logger.info("deleting session: %s", _id) logging.info("deleting session: %s", _id)
session = self.sessions.pop(_id, None) session = self.sessions.pop(_id, None)
result = False result = False
if session: if session:
logger.info("shutting session down: %s", _id) logging.info("shutting session down: %s", _id)
session.data_collect()
session.shutdown() session.shutdown()
result = True result = True
else: else:
logger.error("session to delete did not exist: %s", _id) logging.error("session to delete did not exist: %s", _id)
return result return result


@ -1,22 +1,18 @@
""" """
CORE data objects. CORE data objects.
""" """
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Any, Optional
import netaddr from dataclasses import dataclass
from typing import List, Tuple
from core import utils
from core.emulator.enumerations import ( from core.emulator.enumerations import (
EventTypes, EventTypes,
ExceptionLevels, ExceptionLevels,
LinkTypes, LinkTypes,
MessageFlags, MessageFlags,
NodeTypes,
) )
if TYPE_CHECKING:
from core.nodes.base import CoreNode, NodeBase
@dataclass @dataclass
class ConfigData: class ConfigData:
@ -24,14 +20,14 @@ class ConfigData:
node: int = None node: int = None
object: str = None object: str = None
type: int = None type: int = None
data_types: tuple[int] = None data_types: Tuple[int] = None
data_values: str = None data_values: str = None
captions: str = None captions: str = None
bitmap: str = None bitmap: str = None
possible_values: str = None possible_values: str = None
groups: str = None groups: str = None
session: int = None session: int = None
iface_id: int = None interface_number: int = None
network_id: int = None network_id: int = None
opaque: str = None opaque: str = None
@ -42,7 +38,7 @@ class EventData:
event_type: EventTypes = None event_type: EventTypes = None
name: str = None name: str = None
data: str = None data: str = None
time: str = None time: float = None
session: int = None session: int = None
@ -71,287 +67,66 @@ class FileData:
compressed_data: str = None compressed_data: str = None
@dataclass
class NodeOptions:
"""
Options for creating and updating nodes within core.
"""
name: str = None
model: Optional[str] = "PC"
canvas: int = None
icon: str = None
services: list[str] = field(default_factory=list)
config_services: list[str] = field(default_factory=list)
x: float = None
y: float = None
lat: float = None
lon: float = None
alt: float = None
server: str = None
image: str = None
emane: str = None
legacy: bool = False
# src, dst
binds: list[tuple[str, str]] = field(default_factory=list)
# src, dst, unique, delete
volumes: list[tuple[str, str, bool, bool]] = field(default_factory=list)
def set_position(self, x: float, y: float) -> None:
"""
Convenience method for setting position.
:param x: x position
:param y: y position
:return: nothing
"""
self.x = x
self.y = y
def set_location(self, lat: float, lon: float, alt: float) -> None:
"""
Convenience method for setting location.
:param lat: latitude
:param lon: longitude
:param alt: altitude
:return: nothing
"""
self.lat = lat
self.lon = lon
self.alt = alt
@dataclass @dataclass
class NodeData: class NodeData:
"""
Node to broadcast.
"""
node: "NodeBase"
message_type: MessageFlags = None message_type: MessageFlags = None
source: str = None
@dataclass
class InterfaceData:
"""
Convenience class for storing interface data.
"""
id: int = None id: int = None
node_type: NodeTypes = None
name: str = None name: str = None
mac: str = None ip_address: str = None
ip4: str = None mac_address: str = None
ip4_mask: int = None ip6_address: str = None
ip6: str = None model: str = None
ip6_mask: int = None emulation_id: int = None
mtu: int = None server: str = None
session: int = None
def get_ips(self) -> list[str]: x_position: float = None
""" y_position: float = None
Returns a list of ip4 and ip6 addresses when present. canvas: int = None
network_id: int = None
:return: list of ip addresses services: List[str] = None
""" latitude: float = None
ips = [] longitude: float = None
if self.ip4 and self.ip4_mask: altitude: float = None
ips.append(f"{self.ip4}/{self.ip4_mask}") icon: str = None
if self.ip6 and self.ip6_mask: opaque: str = None
ips.append(f"{self.ip6}/{self.ip6_mask}") source: str = None
return ips
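A hedged example of the get_ips helper above; the addresses are invented.
iface = InterfaceData(id=0, name="eth0", ip4="10.0.0.1", ip4_mask=24, ip6="2001::1", ip6_mask=64)
iface.get_ips()   # -> ["10.0.0.1/24", "2001::1/64"]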
@dataclass
class LinkOptions:
"""
Options for creating and updating links within core.
"""
delay: int = None
bandwidth: int = None
loss: float = None
dup: int = None
jitter: int = None
mer: int = None
burst: int = None
mburst: int = None
unidirectional: int = None
key: int = None
buffer: int = None
def update(self, options: "LinkOptions") -> bool:
"""
Updates current options with values from other options.
:param options: options to update with
:return: True if any value has changed, False otherwise
"""
changed = False
if options.delay is not None and 0 <= options.delay != self.delay:
self.delay = options.delay
changed = True
if options.bandwidth is not None and 0 <= options.bandwidth != self.bandwidth:
self.bandwidth = options.bandwidth
changed = True
if options.loss is not None and 0 <= options.loss != self.loss:
self.loss = options.loss
changed = True
if options.dup is not None and 0 <= options.dup != self.dup:
self.dup = options.dup
changed = True
if options.jitter is not None and 0 <= options.jitter != self.jitter:
self.jitter = options.jitter
changed = True
if options.buffer is not None and 0 <= options.buffer != self.buffer:
self.buffer = options.buffer
changed = True
return changed
def is_clear(self) -> bool:
"""
Checks if the current option values represent a clear state.
:return: True if the current values should clear, False otherwise
"""
clear = self.delay is None or self.delay <= 0
clear &= self.jitter is None or self.jitter <= 0
clear &= self.loss is None or self.loss <= 0
clear &= self.dup is None or self.dup <= 0
clear &= self.bandwidth is None or self.bandwidth <= 0
clear &= self.buffer is None or self.buffer <= 0
return clear
def __eq__(self, other: Any) -> bool:
"""
Custom logic to check if this link options is equivalent to another.
:param other: other object to check
:return: True if they are both link options with the same values,
False otherwise
"""
if not isinstance(other, LinkOptions):
return False
return (
self.delay == other.delay
and self.jitter == other.jitter
and self.loss == other.loss
and self.dup == other.dup
and self.bandwidth == other.bandwidth
and self.buffer == other.buffer
)
@dataclass @dataclass
class LinkData: class LinkData:
"""
Represents all data associated with a link.
"""
message_type: MessageFlags = None message_type: MessageFlags = None
type: LinkTypes = LinkTypes.WIRED
label: str = None label: str = None
node1_id: int = None node1_id: int = None
node2_id: int = None node2_id: int = None
delay: float = None
bandwidth: float = None
per: float = None
dup: float = None
jitter: float = None
mer: float = None
burst: float = None
session: int = None
mburst: float = None
link_type: LinkTypes = None
gui_attributes: str = None
unidirectional: int = None
emulation_id: int = None
network_id: int = None network_id: int = None
iface1: InterfaceData = None key: int = None
iface2: InterfaceData = None interface1_id: int = None
options: LinkOptions = LinkOptions() interface1_name: str = None
interface1_ip4: str = None
interface1_ip4_mask: int = None
interface1_mac: str = None
interface1_ip6: str = None
interface1_ip6_mask: int = None
interface2_id: int = None
interface2_name: str = None
interface2_ip4: str = None
interface2_ip4_mask: int = None
interface2_mac: str = None
interface2_ip6: str = None
interface2_ip6_mask: int = None
opaque: str = None
color: str = None color: str = None
source: str = None
class IpPrefixes:
"""
Convenience class to help generate IP4 and IP6 addresses for nodes within CORE.
"""
def __init__(self, ip4_prefix: str = None, ip6_prefix: str = None) -> None:
"""
Creates an IpPrefixes object.
:param ip4_prefix: ip4 prefix to use for generation
:param ip6_prefix: ip6 prefix to use for generation
:raises ValueError: when both ip4 and ip6 prefixes have not been provided
"""
if not ip4_prefix and not ip6_prefix:
raise ValueError("ip4 or ip6 must be provided")
self.ip4 = None
if ip4_prefix:
self.ip4 = netaddr.IPNetwork(ip4_prefix)
self.ip6 = None
if ip6_prefix:
self.ip6 = netaddr.IPNetwork(ip6_prefix)
def ip4_address(self, node_id: int) -> str:
"""
Convenience method to return the IP4 address for a node.
:param node_id: node id to get IP4 address for
:return: IP4 address or None
"""
if not self.ip4:
raise ValueError("ip4 prefixes have not been set")
return str(self.ip4[node_id])
def ip6_address(self, node_id: int) -> str:
"""
Convenience method to return the IP6 address for a node.
:param node_id: node id to get IP6 address for
:return: IP6 address or None
"""
if not self.ip6:
raise ValueError("ip6 prefixes have not been set")
return str(self.ip6[node_id])
def gen_iface(self, node_id: int, name: str = None, mac: str = None):
"""
Creates interface data for linking nodes, using the node's unique id for
generation, along with a random mac address, unless provided.
:param node_id: node id to create an interface for
:param name: name to set for interface, default is eth{id}
:param mac: mac address to use for this interface, default is random
generation
:return: new interface data for the provided node
"""
# generate ip4 data
ip4 = None
ip4_mask = None
if self.ip4:
ip4 = self.ip4_address(node_id)
ip4_mask = self.ip4.prefixlen
# generate ip6 data
ip6 = None
ip6_mask = None
if self.ip6:
ip6 = self.ip6_address(node_id)
ip6_mask = self.ip6.prefixlen
# random mac
if not mac:
mac = utils.random_mac()
return InterfaceData(
name=name, ip4=ip4, ip4_mask=ip4_mask, ip6=ip6, ip6_mask=ip6_mask, mac=mac
)
def create_iface(
self, node: "CoreNode", name: str = None, mac: str = None
) -> InterfaceData:
"""
Creates interface data for linking nodes, using the node's unique id for
generation, along with a random mac address, unless provided.
:param node: node to create interface for
:param name: name to set for interface, default is eth{id}
:param mac: mac address to use for this interface, default is random
generation
:return: new interface data for the provided node
"""
iface_data = self.gen_iface(node.id, name, mac)
iface_data.id = node.next_iface_id()
return iface_data
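A minimal usage sketch of the refactored data.py helpers above (IpPrefixes, InterfaceData, LinkOptions); the prefix values below are arbitrary examples:

    from core.emulator.data import IpPrefixes, LinkOptions

    # generate per-node addressing from shared prefixes
    prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64")
    iface_data = prefixes.gen_iface(1, name="eth0")
    print(iface_data.get_ips())  # ["10.0.0.1/24", "2001::1/64"], mac is random

    # link options merge only values that actually changed
    options = LinkOptions(bandwidth=54_000_000, delay=5000)
    changed = options.update(LinkOptions(delay=10000))  # True, delay differs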

View file

@ -6,23 +6,18 @@ import logging
import os import os
import threading import threading
from collections import OrderedDict from collections import OrderedDict
from pathlib import Path
from tempfile import NamedTemporaryFile from tempfile import NamedTemporaryFile
from typing import TYPE_CHECKING, Callable from typing import TYPE_CHECKING, Callable, Dict, Tuple
import netaddr import netaddr
from fabric import Connection from fabric import Connection
from invoke import UnexpectedExit from invoke import UnexpectedExit
from core import utils from core import utils
from core.emulator.links import CoreLink from core.errors import CoreCommandError
from core.errors import CoreCommandError, CoreError
from core.executables import get_requirements
from core.nodes.interface import GreTap from core.nodes.interface import GreTap
from core.nodes.network import CoreNetwork, CtrlNet from core.nodes.network import CoreNetwork, CtrlNet
logger = logging.getLogger(__name__)
if TYPE_CHECKING: if TYPE_CHECKING:
from core.emulator.session import Session from core.emulator.session import Session
@ -42,13 +37,13 @@ class DistributedServer:
:param name: convenience name to associate with host :param name: convenience name to associate with host
:param host: host to connect to :param host: host to connect to
""" """
self.name: str = name self.name = name
self.host: str = host self.host = host
self.conn: Connection = Connection(host, user="root") self.conn = Connection(host, user="root")
self.lock: threading.Lock = threading.Lock() self.lock = threading.Lock()
def remote_cmd( def remote_cmd(
self, cmd: str, env: dict[str, str] = None, cwd: str = None, wait: bool = True self, cmd: str, env: Dict[str, str] = None, cwd: str = None, wait: bool = True
) -> str: ) -> str:
""" """
Run command remotely using server connection. Run command remotely using server connection.
@ -65,7 +60,7 @@ class DistributedServer:
replace_env = env is not None replace_env = env is not None
if not wait: if not wait:
cmd += " &" cmd += " &"
logger.debug( logging.debug(
"remote cmd server(%s) cwd(%s) wait(%s): %s", self.host, cwd, wait, cmd "remote cmd server(%s) cwd(%s) wait(%s): %s", self.host, cwd, wait, cmd
) )
try: try:
@ -83,31 +78,31 @@ class DistributedServer:
stdout, stderr = e.streams_for_display() stdout, stderr = e.streams_for_display()
raise CoreCommandError(e.result.exited, cmd, stdout, stderr) raise CoreCommandError(e.result.exited, cmd, stdout, stderr)
def remote_put(self, src_path: Path, dst_path: Path) -> None: def remote_put(self, source: str, destination: str) -> None:
""" """
Push file to remote server. Push file to remote server.
:param src_path: source file to push :param source: source file to push
:param dst_path: destination file location :param destination: destination file location
:return: nothing :return: nothing
""" """
with self.lock: with self.lock:
self.conn.put(str(src_path), str(dst_path)) self.conn.put(source, destination)
def remote_put_temp(self, dst_path: Path, data: str) -> None: def remote_put_temp(self, destination: str, data: str) -> None:
""" """
Remote push file contents to a remote server, using a temp file as an Remote push file contents to a remote server, using a temp file as an
intermediate step. intermediate step.
:param dst_path: file destination for data :param destination: file destination for data
:param data: data to store in remote file :param data: data to store in remote file
:return: nothing :return: nothing
""" """
with self.lock: with self.lock:
temp = NamedTemporaryFile(delete=False) temp = NamedTemporaryFile(delete=False)
temp.write(data.encode()) temp.write(data.encode("utf-8"))
temp.close() temp.close()
self.conn.put(temp.name, str(dst_path)) self.conn.put(temp.name, destination)
os.unlink(temp.name) os.unlink(temp.name)
@ -122,10 +117,12 @@ class DistributedController:
:param session: session :param session: session
""" """
self.session: "Session" = session self.session = session
self.servers: dict[str, DistributedServer] = OrderedDict() self.servers = OrderedDict()
self.tunnels: dict[int, tuple[GreTap, GreTap]] = {} self.tunnels = {}
self.address: str = self.session.options.get("distributed_address") self.address = self.session.options.get_config(
"distributed_address", default=None
)
def add_server(self, name: str, host: str) -> None: def add_server(self, name: str, host: str) -> None:
""" """
@ -134,19 +131,10 @@ class DistributedController:
:param name: distributed server name :param name: distributed server name
:param host: distributed server host address :param host: distributed server host address
:return: nothing :return: nothing
:raises CoreError: when there is an error validating server
""" """
server = DistributedServer(name, host) server = DistributedServer(name, host)
for requirement in get_requirements(self.session.use_ovs()):
try:
server.remote_cmd(f"which {requirement}")
except CoreCommandError:
raise CoreError(
f"server({server.name}) failed validation for "
f"command({requirement})"
)
self.servers[name] = server self.servers[name] = server
cmd = f"mkdir -p {self.session.directory}" cmd = f"mkdir -p {self.session.session_dir}"
server.remote_cmd(cmd) server.remote_cmd(cmd)
def execute(self, func: Callable[[DistributedServer], None]) -> None: def execute(self, func: Callable[[DistributedServer], None]) -> None:
@ -172,55 +160,45 @@ class DistributedController:
tunnels = self.tunnels[key] tunnels = self.tunnels[key]
for tunnel in tunnels: for tunnel in tunnels:
tunnel.shutdown() tunnel.shutdown()
# remove all remote session directories # remove all remote session directories
for name in self.servers: for name in self.servers:
server = self.servers[name] server = self.servers[name]
cmd = f"rm -rf {self.session.directory}" cmd = f"rm -rf {self.session.session_dir}"
server.remote_cmd(cmd) server.remote_cmd(cmd)
# clear tunnels # clear tunnels
self.tunnels.clear() self.tunnels.clear()
def start(self) -> None: def start(self) -> None:
""" """
Start distributed network tunnels for control networks. Start distributed network tunnels.
:return: nothing :return: nothing
""" """
mtu = self.session.options.get_int("mtu") for node_id in self.session.nodes:
for node in self.session.nodes.values(): node = self.session.nodes[node_id]
if not isinstance(node, CtrlNet) or node.serverintf is not None:
if not isinstance(node, CoreNetwork):
continue continue
if isinstance(node, CtrlNet) and node.serverintf is not None:
continue
for name in self.servers: for name in self.servers:
server = self.servers[name] server = self.servers[name]
self.create_gre_tunnel(node, server, mtu, True) self.create_gre_tunnel(node, server)
def create_gre_tunnels(self, core_link: CoreLink) -> None:
"""
Creates gre tunnels for a core link with a ptp network connection.
:param core_link: core link to create gre tunnel for
:return: nothing
"""
if not self.servers:
return
if not core_link.ptp:
raise CoreError(
"attempted to create gre tunnel for core link without a ptp network"
)
mtu = self.session.options.get_int("mtu")
for server in self.servers.values():
self.create_gre_tunnel(core_link.ptp, server, mtu, True)
def create_gre_tunnel( def create_gre_tunnel(
self, node: CoreNetwork, server: DistributedServer, mtu: int, start: bool self, node: CoreNetwork, server: DistributedServer
) -> tuple[GreTap, GreTap]: ) -> Tuple[GreTap, GreTap]:
""" """
Create gre tunnel using a pair of gre taps between the local and remote server. Create gre tunnel using a pair of gre taps between the local and remote server.
:param node: node to create gre tunnel for :param node: node to create gre tunnel for
:param server: server to create tunnel for :param server: server to create tunnel for
:param mtu: mtu for gre taps
:param start: True to start gre taps, False otherwise
:return: local and remote gre taps created for tunnel :return: local and remote gre taps created for tunnel
""" """
host = server.host host = server.host
@ -228,39 +206,52 @@ class DistributedController:
tunnel = self.tunnels.get(key) tunnel = self.tunnels.get(key)
if tunnel is not None: if tunnel is not None:
return tunnel return tunnel
# local to server # local to server
logger.info("local tunnel node(%s) to remote(%s) key(%s)", node.name, host, key) logging.info(
local_tap = GreTap(self.session, host, key=key, mtu=mtu) "local tunnel node(%s) to remote(%s) key(%s)", node.name, host, key
if start: )
local_tap.startup() local_tap = GreTap(session=self.session, remoteip=host, key=key)
local_tap.net_client.set_iface_master(node.brname, local_tap.localname) local_tap.net_client.create_interface(node.brname, local_tap.localname)
# server to local # server to local
logger.info( logging.info(
"remote tunnel node(%s) to local(%s) key(%s)", node.name, self.address, key "remote tunnel node(%s) to local(%s) key(%s)", node.name, self.address, key
) )
remote_tap = GreTap(self.session, self.address, key=key, server=server, mtu=mtu) remote_tap = GreTap(
if start: session=self.session, remoteip=self.address, key=key, server=server
remote_tap.startup() )
remote_tap.net_client.set_iface_master(node.brname, remote_tap.localname) remote_tap.net_client.create_interface(node.brname, remote_tap.localname)
# save tunnels for shutdown # save tunnels for shutdown
tunnel = (local_tap, remote_tap) tunnel = (local_tap, remote_tap)
self.tunnels[key] = tunnel self.tunnels[key] = tunnel
return tunnel return tunnel
def tunnel_key(self, node1_id: int, node2_id: int) -> int: def tunnel_key(self, n1_id: int, n2_id: int) -> int:
""" """
Compute a 32-bit key used to uniquely identify a GRE tunnel. Compute a 32-bit key used to uniquely identify a GRE tunnel.
The hash(n1num), hash(n2num) values are used, so node numbers may be The hash(n1num), hash(n2num) values are used, so node numbers may be
None or string values (used for e.g. "ctrlnet"). None or string values (used for e.g. "ctrlnet").
:param node1_id: node one id :param n1_id: node one id
:param node2_id: node two id :param n2_id: node two id
:return: tunnel key for the node pair :return: tunnel key for the node pair
""" """
logger.debug("creating tunnel key for: %s, %s", node1_id, node2_id) logging.debug("creating tunnel key for: %s, %s", n1_id, n2_id)
key = ( key = (
(self.session.id << 16) (self.session.id << 16) ^ utils.hashkey(n1_id) ^ (utils.hashkey(n2_id) << 8)
^ utils.hashkey(node1_id)
^ (utils.hashkey(node2_id) << 8)
) )
return key & 0xFFFFFFFF return key & 0xFFFFFFFF
def get_tunnel(self, n1_id: int, n2_id: int) -> Tuple[GreTap, GreTap]:
"""
Return the GreTap between two nodes if it exists.
:param n1_id: node one id
:param n2_id: node two id
:return: gre tap between nodes or None
"""
key = self.tunnel_key(n1_id, n2_id)
logging.debug("checking for tunnel key(%s) in: %s", key, self.tunnels)
return self.tunnels.get(key)
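For reference, the 32-bit key used above can be reproduced outside the controller; this sketch mirrors the tunnel_key computation and only assumes core.utils.hashkey:

    from core import utils

    def example_tunnel_key(session_id: int, n1_id: int, n2_id: int) -> int:
        # same combination as DistributedController.tunnel_key above
        key = (session_id << 16) ^ utils.hashkey(n1_id) ^ (utils.hashkey(n2_id) << 8)
        return key & 0xFFFFFFFF

    print(example_tunnel_key(1, 1, 2))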

View file

@ -0,0 +1,335 @@
from typing import List, Optional
import netaddr
from core import utils
from core.api.grpc.core_pb2 import LinkOptions
from core.emane.nodes import EmaneNet
from core.emulator.enumerations import LinkTypes
from core.nodes.base import CoreNetworkBase, CoreNode
from core.nodes.interface import CoreInterface
from core.nodes.physical import PhysicalNode
class IdGen:
def __init__(self, _id: int = 0) -> None:
self.id = _id
def next(self) -> int:
self.id += 1
return self.id
def link_config(
network: CoreNetworkBase,
interface: CoreInterface,
link_options: LinkOptions,
devname: str = None,
interface_two: CoreInterface = None,
) -> None:
"""
Convenience method for configuring a link.
:param network: network to configure link for
:param interface: interface to configure
:param link_options: data to configure link with
:param devname: device name, default is None
:param interface_two: other interface associated, default is None
:return: nothing
"""
config = {
"netif": interface,
"bw": link_options.bandwidth,
"delay": link_options.delay,
"loss": link_options.per,
"duplicate": link_options.dup,
"jitter": link_options.jitter,
"netif2": interface_two,
}
# hacky check here, because physical and emane nodes do not conform to the same
# linkconfig interface
if not isinstance(network, (EmaneNet, PhysicalNode)):
config["devname"] = devname
network.linkconfig(**config)
class NodeOptions:
"""
Options for creating and updating nodes within core.
"""
def __init__(self, name: str = None, model: str = "PC", image: str = None) -> None:
"""
Create a NodeOptions object.
:param name: name of node, defaults to node class name postfix with its id
:param model: defines services for default and physical nodes, defaults to
"router"
:param image: image to use for docker nodes
"""
self.name = name
self.model = model
self.canvas = None
self.icon = None
self.opaque = None
self.services = []
self.config_services = []
self.x = None
self.y = None
self.lat = None
self.lon = None
self.alt = None
self.emulation_id = None
self.server = None
self.image = image
self.emane = None
def set_position(self, x: float, y: float) -> None:
"""
Convenience method for setting position.
:param x: x position
:param y: y position
:return: nothing
"""
self.x = x
self.y = y
def set_location(self, lat: float, lon: float, alt: float) -> None:
"""
Convenience method for setting location.
:param lat: latitude
:param lon: longitude
:param alt: altitude
:return: nothing
"""
self.lat = lat
self.lon = lon
self.alt = alt
class LinkOptions:
"""
Options for creating and updating links within core.
"""
def __init__(self, _type: LinkTypes = LinkTypes.WIRED) -> None:
"""
Create a LinkOptions object.
:param _type: type of link, defaults to
wired
"""
self.type = _type
self.session = None
self.delay = None
self.bandwidth = None
self.per = None
self.dup = None
self.jitter = None
self.mer = None
self.burst = None
self.mburst = None
self.gui_attributes = None
self.unidirectional = None
self.emulation_id = None
self.network_id = None
self.key = None
self.opaque = None
class InterfaceData:
"""
Convenience class for storing interface data.
"""
def __init__(
self,
_id: int,
name: str,
mac: str,
ip4: str,
ip4_mask: int,
ip6: str,
ip6_mask: int,
) -> None:
"""
Creates an InterfaceData object.
:param _id: interface id
:param name: name for interface
:param mac: mac address
:param ip4: ipv4 address
:param ip4_mask: ipv4 bit mask
:param ip6: ipv6 address
:param ip6_mask: ipv6 bit mask
"""
self.id = _id
self.name = name
self.mac = mac
self.ip4 = ip4
self.ip4_mask = ip4_mask
self.ip6 = ip6
self.ip6_mask = ip6_mask
def has_ip4(self) -> bool:
"""
Determines if interface has an ip4 address.
:return: True if has ip4, False otherwise
"""
return all([self.ip4, self.ip4_mask])
def has_ip6(self) -> bool:
"""
Determines if interface has an ip6 address.
:return: True if has ip6, False otherwise
"""
return all([self.ip6, self.ip6_mask])
def ip4_address(self) -> Optional[str]:
"""
Retrieve a string representation of the ip4 address and netmask.
:return: ip4 string or None
"""
if self.has_ip4():
return f"{self.ip4}/{self.ip4_mask}"
else:
return None
def ip6_address(self) -> Optional[str]:
"""
Retrieve a string representation of the ip6 address and netmask.
:return: ip6 string or None
"""
if self.has_ip6():
return f"{self.ip6}/{self.ip6_mask}"
else:
return None
def get_addresses(self) -> List[str]:
"""
Returns a list of ip4 and ip6 addresses when present.
:return: list of addresses
"""
ip4 = self.ip4_address()
ip6 = self.ip6_address()
return [i for i in [ip4, ip6] if i]
class IpPrefixes:
"""
Convenience class to help generate IP4 and IP6 addresses for nodes within CORE.
"""
def __init__(self, ip4_prefix: str = None, ip6_prefix: str = None) -> None:
"""
Creates an IpPrefixes object.
:param ip4_prefix: ip4 prefix to use for generation
:param ip6_prefix: ip6 prefix to use for generation
:raises ValueError: when both ip4 and ip6 prefixes have not been provided
"""
if not ip4_prefix and not ip6_prefix:
raise ValueError("ip4 or ip6 must be provided")
self.ip4 = None
if ip4_prefix:
self.ip4 = netaddr.IPNetwork(ip4_prefix)
self.ip6 = None
if ip6_prefix:
self.ip6 = netaddr.IPNetwork(ip6_prefix)
def ip4_address(self, node: CoreNode) -> str:
"""
Convenience method to return the IP4 address for a node.
:param node: node to get IP4 address for
:return: IP4 address or None
"""
if not self.ip4:
raise ValueError("ip4 prefixes have not been set")
return str(self.ip4[node.id])
def ip6_address(self, node: CoreNode) -> str:
"""
Convenience method to return the IP6 address for a node.
:param node: node to get IP6 address for
:return: IP6 address or None
"""
if not self.ip6:
raise ValueError("ip6 prefixes have not been set")
return str(self.ip6[node.id])
def create_interface(
self, node: CoreNode, name: str = None, mac: str = None
) -> InterfaceData:
"""
Creates interface data for linking nodes, using the node's unique id for
generation, along with a random mac address, unless provided.
:param node: node to create interface for
:param name: name to set for interface, default is eth{id}
:param mac: mac address to use for this interface, default is random
generation
:return: new interface data for the provided node
"""
# interface id
interface_id = node.newifindex()
# generate ip4 data
ip4 = None
ip4_mask = None
if self.ip4:
ip4 = self.ip4_address(node)
ip4_mask = self.ip4.prefixlen
# generate ip6 data
ip6 = None
ip6_mask = None
if self.ip6:
ip6 = self.ip6_address(node)
ip6_mask = self.ip6.prefixlen
# random mac
if not mac:
mac = utils.random_mac()
return InterfaceData(
_id=interface_id,
name=name,
ip4=ip4,
ip4_mask=ip4_mask,
ip6=ip6,
ip6_mask=ip6_mask,
mac=mac,
)
def create_interface(
node: CoreNode, network: CoreNetworkBase, interface_data: InterfaceData
):
"""
Create an interface for a node on a network using provided interface data.
:param node: node to create interface for
:param network: network to associate interface with
:param interface_data: interface data
:return: created interface
"""
node.newnetif(
network,
addrlist=interface_data.get_addresses(),
hwaddr=interface_data.mac,
ifindex=interface_data.id,
ifname=interface_data.name,
)
return node.netif(interface_data.id)
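A hedged sketch of how the emudata helpers above fit together; the node object would come from a running session, so that part is left as comments:

    from core.emulator.emudata import IpPrefixes, NodeOptions

    prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24")
    options = NodeOptions(name="n1")
    options.set_position(100, 100)
    # with a CoreNode created from these options (session setup omitted):
    # iface_data = prefixes.create_interface(node, name="eth0")
    # iface_data.get_addresses()  ->  ["10.0.0.<node id>/24"]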

View file

@ -20,17 +20,6 @@ class MessageFlags(Enum):
TTY = 0x40 TTY = 0x40
class ConfigFlags(Enum):
"""
Configuration flags.
"""
NONE = 0x00
REQUEST = 0x01
UPDATE = 0x02
RESET = 0x03
class NodeTypes(Enum): class NodeTypes(Enum):
""" """
Node types. Node types.
@ -49,8 +38,6 @@ class NodeTypes(Enum):
CONTROL_NET = 13 CONTROL_NET = 13
DOCKER = 15 DOCKER = 15
LXC = 16 LXC = 16
WIRELESS = 17
PODMAN = 18
class LinkTypes(Enum): class LinkTypes(Enum):
@ -119,9 +106,6 @@ class EventTypes(Enum):
def should_start(self) -> bool: def should_start(self) -> bool:
return self.value > self.DEFINITION_STATE.value return self.value > self.DEFINITION_STATE.value
def already_collected(self) -> bool:
return self.value >= self.DATACOLLECT_STATE.value
class ExceptionLevels(Enum): class ExceptionLevels(Enum):
""" """
@ -133,13 +117,3 @@ class ExceptionLevels(Enum):
ERROR = 2 ERROR = 2
WARNING = 3 WARNING = 3
NOTICE = 4 NOTICE = 4
class NetworkPolicy(Enum):
ACCEPT = "ACCEPT"
DROP = "DROP"
class TransportType(Enum):
RAW = "raw"
VIRTUAL = "virtual"
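The should_start helper shown above is a simple ordering check against DEFINITION_STATE; a small sketch:

    from core.emulator.enumerations import EventTypes

    EventTypes.DEFINITION_STATE.should_start()  # False
    EventTypes.RUNTIME_STATE.should_start()     # True, ordered after DEFINITION_STATE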

View file

@ -1,145 +0,0 @@
import logging
import subprocess
from collections.abc import Callable
from pathlib import Path
from core.emulator.enumerations import EventTypes
from core.errors import CoreError
logger = logging.getLogger(__name__)
class HookManager:
"""
Provides functionality for managing and running script/callback hooks.
"""
def __init__(self) -> None:
"""
Create a HookManager instance.
"""
self.script_hooks: dict[EventTypes, dict[str, str]] = {}
self.callback_hooks: dict[EventTypes, list[Callable[[], None]]] = {}
def reset(self) -> None:
"""
Clear all current hooks.
:return: nothing
"""
self.script_hooks.clear()
self.callback_hooks.clear()
def add_script_hook(self, state: EventTypes, file_name: str, data: str) -> None:
"""
Add a hook script to run for a given state.
:param state: state to run hook on
:param file_name: hook file name
:param data: file data
:return: nothing
"""
logger.info("setting state hook: %s - %s", state, file_name)
state_hooks = self.script_hooks.setdefault(state, {})
if file_name in state_hooks:
raise CoreError(
f"adding duplicate state({state.name}) hook script({file_name})"
)
state_hooks[file_name] = data
def delete_script_hook(self, state: EventTypes, file_name: str) -> None:
"""
Delete a script hook from a given state.
:param state: state to delete script hook from
:param file_name: name of script to delete
:return: nothing
"""
state_hooks = self.script_hooks.get(state, {})
if file_name not in state_hooks:
raise CoreError(
f"deleting state({state.name}) hook script({file_name}) "
"that does not exist"
)
del state_hooks[file_name]
def add_callback_hook(
self, state: EventTypes, hook: Callable[[EventTypes], None]
) -> None:
"""
Add a hook callback to run for a state.
:param state: state to add hook for
:param hook: callback to run
:return: nothing
"""
hooks = self.callback_hooks.setdefault(state, [])
if hook in hooks:
name = getattr(hook, "__name__", repr(hook))
raise CoreError(
f"adding duplicate state({state.name}) hook callback({name})"
)
hooks.append(hook)
def delete_callback_hook(
self, state: EventTypes, hook: Callable[[EventTypes], None]
) -> None:
"""
Delete a state hook.
:param state: state to delete hook for
:param hook: hook to delete
:return: nothing
"""
hooks = self.callback_hooks.get(state, [])
if hook not in hooks:
name = getattr(hook, "__name__", repr(hook))
raise CoreError(
f"deleting state({state.name}) hook callback({name}) "
"that does not exist"
)
hooks.remove(hook)
def run_hooks(
self, state: EventTypes, directory: Path, env: dict[str, str]
) -> None:
"""
Run all hooks for the current state.
:param state: state to run hooks for
:param directory: directory to run script hooks within
:param env: environment to run script hooks with
:return: nothing
"""
state_hooks = self.script_hooks.get(state, {})
for file_name, data in state_hooks.items():
logger.info("running hook %s", file_name)
file_path = directory / file_name
log_path = directory / f"{file_name}.log"
try:
with file_path.open("w") as f:
f.write(data)
with log_path.open("w") as f:
args = ["/bin/sh", file_name]
subprocess.check_call(
args,
stdout=f,
stderr=subprocess.STDOUT,
close_fds=True,
cwd=directory,
env=env,
)
except (OSError, subprocess.CalledProcessError) as e:
raise CoreError(
f"failure running state({state.name}) "
f"hook script({file_name}): {e}"
)
for hook in self.callback_hooks.get(state, []):
try:
hook()
except Exception as e:
name = getattr(hook, "__name__", repr(hook))
raise CoreError(
f"failure running state({state.name}) "
f"hook callback({name}): {e}"
)
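A minimal sketch assuming the HookManager API above; the shell script content and paths are placeholders:

    from pathlib import Path
    from core.emulator.enumerations import EventTypes
    from core.emulator.hooks import HookManager

    manager = HookManager()
    manager.add_script_hook(EventTypes.RUNTIME_STATE, "startup.sh", "echo hello\n")
    manager.add_callback_hook(EventTypes.RUNTIME_STATE, lambda: print("runtime reached"))
    # script hooks are written to and executed from the given directory:
    # manager.run_hooks(EventTypes.RUNTIME_STATE, Path("/tmp/session"), env={})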

View file

@ -1,257 +0,0 @@
"""
Provides functionality for maintaining information about known links
for a session.
"""
import logging
from collections.abc import ValuesView
from dataclasses import dataclass
from typing import Optional
from core.emulator.data import LinkData, LinkOptions
from core.emulator.enumerations import LinkTypes, MessageFlags
from core.errors import CoreError
from core.nodes.base import NodeBase
from core.nodes.interface import CoreInterface
from core.nodes.network import PtpNet
logger = logging.getLogger(__name__)
LinkKeyType = tuple[int, Optional[int], int, Optional[int]]
def create_key(
node1: NodeBase,
iface1: Optional[CoreInterface],
node2: NodeBase,
iface2: Optional[CoreInterface],
) -> LinkKeyType:
"""
Creates a unique key for tracking links.
:param node1: first node in link
:param iface1: node1 interface
:param node2: second node in link
:param iface2: node2 interface
:return: link key
"""
iface1_id = iface1.id if iface1 else None
iface2_id = iface2.id if iface2 else None
if node1.id < node2.id:
return node1.id, iface1_id, node2.id, iface2_id
else:
return node2.id, iface2_id, node1.id, iface1_id
@dataclass
class CoreLink:
"""
Provides a core link data structure.
"""
node1: NodeBase
iface1: Optional[CoreInterface]
node2: NodeBase
iface2: Optional[CoreInterface]
ptp: PtpNet = None
label: str = None
color: str = None
def key(self) -> LinkKeyType:
"""
Retrieve the key for this link.
:return: link key
"""
return create_key(self.node1, self.iface1, self.node2, self.iface2)
def is_unidirectional(self) -> bool:
"""
Checks if this link is considered unidirectional, due to current
iface configurations.
:return: True if unidirectional, False otherwise
"""
unidirectional = False
if self.iface1 and self.iface2:
unidirectional = self.iface1.options != self.iface2.options
return unidirectional
def options(self) -> LinkOptions:
"""
Retrieve the options for this link.
:return: options for this link
"""
if self.is_unidirectional():
options = self.iface1.options
else:
if self.iface1:
options = self.iface1.options
else:
options = self.iface2.options
return options
def get_data(self, message_type: MessageFlags, source: str = None) -> LinkData:
"""
Create link data for this link.
:param message_type: link data message type
:param source: source for this data
:return: link data
"""
iface1_data = self.iface1.get_data() if self.iface1 else None
iface2_data = self.iface2.get_data() if self.iface2 else None
return LinkData(
message_type=message_type,
type=LinkTypes.WIRED,
node1_id=self.node1.id,
node2_id=self.node2.id,
iface1=iface1_data,
iface2=iface2_data,
options=self.options(),
label=self.label,
color=self.color,
source=source,
)
def get_data_unidirectional(self, source: str = None) -> LinkData:
"""
Create other unidirectional link data.
:param source: source for this data
:return: unidirectional link data
"""
iface1_data = self.iface1.get_data() if self.iface1 else None
iface2_data = self.iface2.get_data() if self.iface2 else None
return LinkData(
message_type=MessageFlags.NONE,
type=LinkTypes.WIRED,
node1_id=self.node2.id,
node2_id=self.node1.id,
iface1=iface2_data,
iface2=iface1_data,
options=self.iface2.options,
label=self.label,
color=self.color,
source=source,
)
class LinkManager:
"""
Provides core link management.
"""
def __init__(self) -> None:
"""
Create a LinkManager instance.
"""
self._links: dict[LinkKeyType, CoreLink] = {}
self._node_links: dict[int, dict[LinkKeyType, CoreLink]] = {}
def add(self, core_link: CoreLink) -> None:
"""
Add a core link to be tracked.
:param core_link: link to track
:return: nothing
"""
node1, iface1 = core_link.node1, core_link.iface1
node2, iface2 = core_link.node2, core_link.iface2
if core_link.key() in self._links:
raise CoreError(
f"node1({node1.name}) iface1({iface1.id}) "
f"node2({node2.name}) iface2({iface2.id}) link already exists"
)
logger.info(
"adding link from node(%s:%s) to node(%s:%s)",
node1.name,
iface1.name if iface1 else None,
node2.name,
iface2.name if iface2 else None,
)
self._links[core_link.key()] = core_link
node1_links = self._node_links.setdefault(node1.id, {})
node1_links[core_link.key()] = core_link
node2_links = self._node_links.setdefault(node2.id, {})
node2_links[core_link.key()] = core_link
def delete(
self,
node1: NodeBase,
iface1: Optional[CoreInterface],
node2: NodeBase,
iface2: Optional[CoreInterface],
) -> CoreLink:
"""
Remove a link from being tracked.
:param node1: first node in link
:param iface1: node1 interface
:param node2: second node in link
:param iface2: node2 interface
:return: removed core link
"""
key = create_key(node1, iface1, node2, iface2)
if key not in self._links:
raise CoreError(
f"node1({node1.name}) iface1({iface1.id}) "
f"node2({node2.name}) iface2({iface2.id}) is not linked"
)
logger.info(
"deleting link from node(%s:%s) to node(%s:%s)",
node1.name,
iface1.name if iface1 else None,
node2.name,
iface2.name if iface2 else None,
)
node1_links = self._node_links[node1.id]
node1_links.pop(key)
node2_links = self._node_links[node2.id]
node2_links.pop(key)
return self._links.pop(key)
def reset(self) -> None:
"""
Resets and clears all tracking information.
:return: nothing
"""
self._links.clear()
self._node_links.clear()
def get_link(
self,
node1: NodeBase,
iface1: Optional[CoreInterface],
node2: NodeBase,
iface2: Optional[CoreInterface],
) -> Optional[CoreLink]:
"""
Retrieve a link for provided values.
:param node1: first node in link
:param iface1: interface for node1
:param node2: second node in link
:param iface2: interface for node2
:return: core link if present, None otherwise
"""
key = create_key(node1, iface1, node2, iface2)
return self._links.get(key)
def links(self) -> ValuesView[CoreLink]:
"""
Retrieve all known links
:return: iterator for all known links
"""
return self._links.values()
def node_links(self, node: NodeBase) -> ValuesView[CoreLink]:
"""
Retrieve all links for a given node.
:param node: node to get links for
:return: node links
"""
return self._node_links.get(node.id, {}).values()
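A hedged sketch of the link tracking above; nodes and interfaces would come from a live session, so those calls are shown as comments:

    from core.emulator.links import CoreLink, LinkManager

    manager = LinkManager()
    # given session nodes n1, n2 and their interfaces iface1, iface2:
    # core_link = CoreLink(node1=n1, iface1=iface1, node2=n2, iface2=iface2)
    # manager.add(core_link)
    # manager.get_link(n1, iface1, n2, iface2) is core_link  ->  True
    # manager.node_links(n1)  ->  all links tracked for n1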

File diff suppressed because it is too large

View file

@ -1,87 +1,90 @@
from typing import Optional from typing import Any
from core.config import ConfigBool, ConfigInt, ConfigString, Configuration from core.config import ConfigurableManager, ConfigurableOptions, Configuration
from core.errors import CoreError from core.emulator.enumerations import ConfigDataTypes, RegisterTlvs
from core.plugins.sdt import Sdt from core.plugins.sdt import Sdt
class SessionConfig: class SessionConfig(ConfigurableManager, ConfigurableOptions):
""" """
Provides session configuration. Provides session configuration.
""" """
options: list[Configuration] = [ name = "session"
ConfigString(id="controlnet", label="Control Network"), options = [
ConfigString(id="controlnet0", label="Control Network 0"), Configuration(
ConfigString(id="controlnet1", label="Control Network 1"), _id="controlnet", _type=ConfigDataTypes.STRING, label="Control Network"
ConfigString(id="controlnet2", label="Control Network 2"),
ConfigString(id="controlnet3", label="Control Network 3"),
ConfigString(id="controlnet_updown_script", label="Control Network Script"),
ConfigBool(id="enablerj45", default="1", label="Enable RJ45s"),
ConfigBool(id="preservedir", default="0", label="Preserve session dir"),
ConfigBool(id="enablesdt", default="0", label="Enable SDT3D output"),
ConfigString(id="sdturl", default=Sdt.DEFAULT_SDT_URL, label="SDT3D URL"),
ConfigBool(id="ovs", default="0", label="Enable OVS"),
ConfigInt(id="platform_id_start", default="1", label="EMANE Platform ID Start"),
ConfigInt(id="nem_id_start", default="1", label="EMANE NEM ID Start"),
ConfigBool(id="link_enabled", default="1", label="EMANE Links?"),
ConfigInt(
id="loss_threshold", default="30", label="EMANE Link Loss Threshold (%)"
), ),
ConfigInt( Configuration(
id="link_interval", default="1", label="EMANE Link Check Interval (sec)" _id="controlnet0", _type=ConfigDataTypes.STRING, label="Control Network 0"
),
Configuration(
_id="controlnet1", _type=ConfigDataTypes.STRING, label="Control Network 1"
),
Configuration(
_id="controlnet2", _type=ConfigDataTypes.STRING, label="Control Network 2"
),
Configuration(
_id="controlnet3", _type=ConfigDataTypes.STRING, label="Control Network 3"
),
Configuration(
_id="controlnet_updown_script",
_type=ConfigDataTypes.STRING,
label="Control Network Script",
),
Configuration(
_id="enablerj45",
_type=ConfigDataTypes.BOOL,
default="1",
label="Enable RJ45s",
),
Configuration(
_id="preservedir",
_type=ConfigDataTypes.BOOL,
default="0",
label="Preserve session dir",
),
Configuration(
_id="enablesdt",
_type=ConfigDataTypes.BOOL,
default="0",
label="Enable SDT3D output",
),
Configuration(
_id="sdturl",
_type=ConfigDataTypes.STRING,
default=Sdt.DEFAULT_SDT_URL,
label="SDT3D URL",
), ),
ConfigInt(id="link_timeout", default="4", label="EMANE Link Timeout (sec)"),
ConfigInt(id="mtu", default="0", label="MTU for All Devices"),
] ]
config_type = RegisterTlvs.UTILITY
def __init__(self, config: dict[str, str] = None) -> None: def __init__(self) -> None:
super().__init__()
self.set_configs(self.default_values())
def get_config(
self,
_id: str,
node_id: int = ConfigurableManager._default_node,
config_type: str = ConfigurableManager._default_type,
default: Any = None,
) -> str:
""" """
Create a SessionConfig instance. Retrieves a specific configuration for a node and configuration type.
:param config: configuration to initialize with :param _id: specific configuration to retrieve
:param node_id: node id to store configuration for
:param config_type: configuration type to store configuration for
:param default: default value to return when value is not found
:return: configuration value
""" """
self._config: dict[str, str] = {x.id: x.default for x in self.options} value = super().get_config(_id, node_id, config_type, default)
self._config.update(config or {}) if value == "":
value = default
return value
def update(self, config: dict[str, str]) -> None: def get_config_bool(self, name: str, default: Any = None) -> bool:
"""
Update current configuration with provided values.
:param config: configuration to update with
:return: nothing
"""
self._config.update(config)
def set(self, name: str, value: str) -> None:
"""
Set a configuration value.
:param name: name of configuration to set
:param value: value to set
:return: nothing
"""
self._config[name] = value
def get(self, name: str, default: str = None) -> Optional[str]:
"""
Retrieve configuration value.
:param name: name of configuration to get
:param default: value to return as default
:return: return found configuration value or default
"""
return self._config.get(name, default)
def all(self) -> dict[str, str]:
"""
Retrieve all configuration options.
:return: configuration value dict
"""
return self._config
def get_bool(self, name: str, default: bool = None) -> bool:
""" """
Get configuration value as a boolean. Get configuration value as a boolean.
@ -89,15 +92,12 @@ class SessionConfig:
:param default: default value if not found :param default: default value if not found
:return: boolean for configuration value :return: boolean for configuration value
""" """
value = self._config.get(name) value = self.get_config(name)
if value is None and default is None:
raise CoreError(f"missing session options for {name}")
if value is None: if value is None:
return default return default
else: return value.lower() == "true"
return value.lower() == "true"
def get_int(self, name: str, default: int = None) -> int: def get_config_int(self, name: str, default: Any = None) -> int:
""" """
Get configuration value as int. Get configuration value as int.
@ -105,10 +105,7 @@ class SessionConfig:
:param default: default value if not found :param default: default value if not found
:return: int for configuration value :return: int for configuration value
""" """
value = self._config.get(name) value = self.get_config(name, default=default)
if value is None and default is None: if value is not None:
raise CoreError(f"missing session options for {name}") value = int(value)
if value is None: return value
return default
else:
return int(value)
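A short sketch against the dict-backed SessionConfig variant above; option names come from the options list, the values here are arbitrary:

    from core.emulator.sessionconfig import SessionConfig

    config = SessionConfig({"mtu": "1500", "enablesdt": "true"})
    config.get_int("mtu")         # 1500
    config.get_bool("enablesdt")  # True
    config.get("sdturl")          # falls back to the Sdt.DEFAULT_SDT_URL default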

View file

@ -11,7 +11,7 @@ class CoreCommandError(subprocess.CalledProcessError):
def __str__(self) -> str: def __str__(self) -> str:
return ( return (
f"command({self.cmd}), status({self.returncode}):\n" f"Command({self.cmd}), Status({self.returncode}):\n"
f"stdout: {self.output}\nstderr: {self.stderr}" f"stdout: {self.output}\nstderr: {self.stderr}"
) )
@ -30,27 +30,3 @@ class CoreXmlError(Exception):
""" """
pass pass
class CoreServiceError(Exception):
"""
Used when there is an error related to accessing a service.
"""
pass
class CoreServiceBootError(Exception):
"""
Used when there is an error booting a service.
"""
pass
class CoreConfigError(Exception):
"""
Used when there is an error defining a configurable option.
"""
pass
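Illustrative only: how the lowercase __str__ variant above renders a failed command; the command and output strings are made up:

    from core.errors import CoreCommandError

    err = CoreCommandError(1, "ip link show", "no output", "permission denied")
    print(err)
    # command(ip link show), status(1):
    # stdout: no output
    # stderr: permission denied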

View file

@ -1,40 +0,0 @@
BASH: str = "bash"
ETHTOOL: str = "ethtool"
IP: str = "ip"
MOUNT: str = "mount"
NFTABLES: str = "nft"
OVS_VSCTL: str = "ovs-vsctl"
SYSCTL: str = "sysctl"
TC: str = "tc"
TEST: str = "test"
UMOUNT: str = "umount"
VCMD: str = "vcmd"
VNODED: str = "vnoded"
COMMON_REQUIREMENTS: list[str] = [
BASH,
ETHTOOL,
IP,
MOUNT,
NFTABLES,
SYSCTL,
TC,
TEST,
UMOUNT,
VCMD,
VNODED,
]
OVS_REQUIREMENTS: list[str] = [OVS_VSCTL]
def get_requirements(use_ovs: bool) -> list[str]:
"""
Retrieve executable requirements needed to run CORE.
:param use_ovs: True if OVS is being used, False otherwise
:return: list of executable requirements
"""
requirements = COMMON_REQUIREMENTS
if use_ovs:
requirements += OVS_REQUIREMENTS
return requirements
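A hedged sketch pairing the helper above with a local which-style check, similar to the remote validation done in distributed.py:

    import shutil
    from core.executables import get_requirements

    for requirement in get_requirements(use_ovs=False):
        if shutil.which(requirement) is None:
            print(f"missing executable: {requirement}")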

View file

@ -1,66 +1,58 @@
import logging import logging
import math import math
import tkinter as tk import tkinter as tk
from tkinter import PhotoImage, font, messagebox, ttk from tkinter import font, ttk
from tkinter.ttk import Progressbar from tkinter.ttk import Progressbar
from typing import Any, Optional
import grpc import grpc
from core.gui import appconfig, images from core.gui import appconfig, themes
from core.gui import nodeutils as nutils
from core.gui import themes
from core.gui.appconfig import GuiConfig
from core.gui.coreclient import CoreClient from core.gui.coreclient import CoreClient
from core.gui.dialogs.error import ErrorDialog from core.gui.dialogs.error import ErrorDialog
from core.gui.frames.base import InfoFrameBase from core.gui.graph.graph import CanvasGraph
from core.gui.frames.default import DefaultInfoFrame from core.gui.images import ImageEnum, Images
from core.gui.graph.manager import CanvasManager
from core.gui.images import ImageEnum
from core.gui.menubar import Menubar from core.gui.menubar import Menubar
from core.gui.nodeutils import NodeUtils
from core.gui.statusbar import StatusBar from core.gui.statusbar import StatusBar
from core.gui.themes import PADY
from core.gui.toolbar import Toolbar from core.gui.toolbar import Toolbar
from core.gui.validation import InputValidation
logger = logging.getLogger(__name__) WIDTH = 1000
WIDTH: int = 1000 HEIGHT = 800
HEIGHT: int = 800
class Application(ttk.Frame): class Application(ttk.Frame):
def __init__(self, proxy: bool, session_id: int = None) -> None: def __init__(self, proxy: bool):
super().__init__() super().__init__(master=None)
# load node icons # load node icons
nutils.setup() NodeUtils.setup()
# widgets # widgets
self.menubar: Optional[Menubar] = None self.menubar = None
self.toolbar: Optional[Toolbar] = None self.toolbar = None
self.right_frame: Optional[ttk.Frame] = None self.right_frame = None
self.manager: Optional[CanvasManager] = None self.canvas = None
self.statusbar: Optional[StatusBar] = None self.statusbar = None
self.progress: Optional[Progressbar] = None self.validation = None
self.infobar: Optional[ttk.Frame] = None self.progress = None
self.info_frame: Optional[InfoFrameBase] = None
self.show_infobar: tk.BooleanVar = tk.BooleanVar(value=False)
# fonts # fonts
self.fonts_size: dict[str, int] = {} self.fonts_size = None
self.icon_text_font: Optional[font.Font] = None self.icon_text_font = None
self.edge_font: Optional[font.Font] = None self.edge_font = None
# setup # setup
self.guiconfig: GuiConfig = appconfig.read() self.guiconfig = appconfig.read()
self.app_scale: float = self.guiconfig.scale self.app_scale = self.guiconfig["scale"]
self.setup_scaling() self.setup_scaling()
self.style: ttk.Style = ttk.Style() self.style = ttk.Style()
self.setup_theme() self.setup_theme()
self.core: CoreClient = CoreClient(self, proxy) self.core = CoreClient(self, proxy)
self.setup_app() self.setup_app()
self.draw() self.draw()
self.core.setup(session_id) self.core.setup()
def setup_scaling(self) -> None: def setup_scaling(self):
self.fonts_size = {name: font.nametofont(name)["size"] for name in font.names()} self.fonts_size = {name: font.nametofont(name)["size"] for name in font.names()}
text_scale = self.app_scale if self.app_scale < 1 else math.sqrt(self.app_scale) text_scale = self.app_scale if self.app_scale < 1 else math.sqrt(self.app_scale)
themes.scale_fonts(self.fonts_size, self.app_scale) themes.scale_fonts(self.fonts_size, self.app_scale)
@ -69,37 +61,22 @@ class Application(ttk.Frame):
family="TkDefaultFont", size=int(8 * text_scale), weight=font.BOLD family="TkDefaultFont", size=int(8 * text_scale), weight=font.BOLD
) )
def setup_theme(self) -> None: def setup_theme(self):
themes.load(self.style) themes.load(self.style)
self.master.bind_class("Menu", "<<ThemeChanged>>", themes.theme_change_menu) self.master.bind_class("Menu", "<<ThemeChanged>>", themes.theme_change_menu)
self.master.bind("<<ThemeChanged>>", themes.theme_change) self.master.bind("<<ThemeChanged>>", themes.theme_change)
self.style.theme_use(self.guiconfig.preferences.theme) self.style.theme_use(self.guiconfig["preferences"]["theme"])
def setup_app(self) -> None: def setup_app(self):
self.master.title("CORE") self.master.title("CORE")
self.center() self.center()
self.master.protocol("WM_DELETE_WINDOW", self.on_closing) self.master.protocol("WM_DELETE_WINDOW", self.on_closing)
image = images.from_enum(ImageEnum.CORE, width=images.DIALOG_SIZE) image = Images.get(ImageEnum.CORE, 16)
self.master.tk.call("wm", "iconphoto", self.master._w, image) self.master.tk.call("wm", "iconphoto", self.master._w, image)
self.validation = InputValidation(self)
self.master.option_add("*tearOff", tk.FALSE) self.master.option_add("*tearOff", tk.FALSE)
self.setup_file_dialogs()
def setup_file_dialogs(self) -> None: def center(self):
"""
Hack code that needs to initialize a bad dialog so that we can apply
global settings for dialogs to not show hidden files by default and display
the hidden file toggle.
:return: nothing
"""
try:
self.master.tk.call("tk_getOpenFile", "-foobar")
except tk.TclError:
pass
self.master.tk.call("set", "::tk::dialog::file::showHiddenBtn", "1")
self.master.tk.call("set", "::tk::dialog::file::showHiddenVar", "0")
def center(self) -> None:
screen_width = self.master.winfo_screenwidth() screen_width = self.master.winfo_screenwidth()
screen_height = self.master.winfo_screenheight() screen_height = self.master.winfo_screenheight()
x = int((screen_width / 2) - (WIDTH * self.app_scale / 2)) x = int((screen_width / 2) - (WIDTH * self.app_scale / 2))
@ -108,113 +85,68 @@ class Application(ttk.Frame):
f"{int(WIDTH * self.app_scale)}x{int(HEIGHT * self.app_scale)}+{x}+{y}" f"{int(WIDTH * self.app_scale)}x{int(HEIGHT * self.app_scale)}+{x}+{y}"
) )
def draw(self) -> None: def draw(self):
self.master.rowconfigure(0, weight=1) self.master.rowconfigure(0, weight=1)
self.master.columnconfigure(0, weight=1) self.master.columnconfigure(0, weight=1)
self.rowconfigure(0, weight=1) self.rowconfigure(0, weight=1)
self.columnconfigure(1, weight=1) self.columnconfigure(1, weight=1)
self.grid(sticky=tk.NSEW) self.grid(sticky="nsew")
self.toolbar = Toolbar(self) self.toolbar = Toolbar(self, self)
self.toolbar.grid(sticky=tk.NS) self.toolbar.grid(sticky="ns")
self.right_frame = ttk.Frame(self) self.right_frame = ttk.Frame(self)
self.right_frame.columnconfigure(0, weight=1) self.right_frame.columnconfigure(0, weight=1)
self.right_frame.rowconfigure(0, weight=1) self.right_frame.rowconfigure(0, weight=1)
self.right_frame.grid(row=0, column=1, sticky=tk.NSEW) self.right_frame.grid(row=0, column=1, sticky="nsew")
self.draw_canvas() self.draw_canvas()
self.draw_infobar()
self.draw_status() self.draw_status()
self.progress = Progressbar(self.right_frame, mode="indeterminate") self.progress = Progressbar(self.right_frame, mode="indeterminate")
self.menubar = Menubar(self) self.menubar = Menubar(self.master, self)
self.master.config(menu=self.menubar)
def draw_infobar(self) -> None: def draw_canvas(self):
self.infobar = ttk.Frame(self.right_frame, padding=5, relief=tk.RAISED) width = self.guiconfig["preferences"]["width"]
self.infobar.columnconfigure(0, weight=1) height = self.guiconfig["preferences"]["height"]
self.infobar.rowconfigure(1, weight=1) canvas_frame = ttk.Frame(self.right_frame)
label_font = font.Font(weight=font.BOLD, underline=tk.TRUE) canvas_frame.rowconfigure(0, weight=1)
label = ttk.Label( canvas_frame.columnconfigure(0, weight=1)
self.infobar, text="Details", anchor=tk.CENTER, font=label_font canvas_frame.grid(sticky="nsew", pady=1)
self.canvas = CanvasGraph(canvas_frame, self, self.core, width, height)
self.canvas.grid(sticky="nsew")
scroll_y = ttk.Scrollbar(canvas_frame, command=self.canvas.yview)
scroll_y.grid(row=0, column=1, sticky="ns")
scroll_x = ttk.Scrollbar(
canvas_frame, orient=tk.HORIZONTAL, command=self.canvas.xview
) )
label.grid(sticky=tk.EW, pady=PADY) scroll_x.grid(row=1, column=0, sticky="ew")
self.canvas.configure(xscrollcommand=scroll_x.set)
self.canvas.configure(yscrollcommand=scroll_y.set)
def draw_canvas(self) -> None: def draw_status(self):
self.manager = CanvasManager(self.right_frame, self, self.core)
self.manager.notebook.grid(sticky=tk.NSEW)
def draw_status(self) -> None:
self.statusbar = StatusBar(self.right_frame, self) self.statusbar = StatusBar(self.right_frame, self)
self.statusbar.grid(sticky=tk.EW, columnspan=2) self.statusbar.grid(sticky="ew")
def display_info(self, frame_class: type[InfoFrameBase], **kwargs: Any) -> None: def show_grpc_exception(self, title: str, e: grpc.RpcError) -> None:
if not self.show_infobar.get(): logging.exception("app grpc exception", exc_info=e)
return message = e.details()
self.clear_info() self.show_error(title, message)
self.info_frame = frame_class(self.infobar, **kwargs)
self.info_frame.draw()
self.info_frame.grid(sticky=tk.NSEW)
def clear_info(self) -> None: def show_exception(self, title: str, e: Exception) -> None:
if self.info_frame: logging.exception("app exception", exc_info=e)
self.info_frame.destroy() self.show_error(title, str(e))
self.info_frame = None
def default_info(self) -> None: def show_error(self, title: str, message: str) -> None:
self.clear_info() self.after(0, lambda: ErrorDialog(self, title, message).show())
self.display_info(DefaultInfoFrame, app=self)
def show_info(self) -> None: def on_closing(self):
self.default_info()
self.infobar.grid(row=0, column=1, sticky=tk.NSEW)
def hide_info(self) -> None:
self.infobar.grid_forget()
def show_grpc_exception(
self, message: str, e: grpc.RpcError, blocking: bool = False
) -> None:
logger.exception("app grpc exception", exc_info=e)
dialog = ErrorDialog(self, "GRPC Exception", message, e.details())
if blocking:
dialog.show()
else:
self.after(0, lambda: dialog.show())
def show_exception(self, message: str, e: Exception) -> None:
logger.exception("app exception", exc_info=e)
self.after(
0, lambda: ErrorDialog(self, "App Exception", message, str(e)).show()
)
def show_exception_data(self, title: str, message: str, details: str) -> None:
self.after(0, lambda: ErrorDialog(self, title, message, details).show())
def show_error(self, title: str, message: str, blocking: bool = False) -> None:
if blocking:
messagebox.showerror(title, message, parent=self)
else:
self.after(0, lambda: messagebox.showerror(title, message, parent=self))
def on_closing(self) -> None:
if self.toolbar.picker:
self.toolbar.picker.destroy()
self.menubar.prompt_save_running_session(True) self.menubar.prompt_save_running_session(True)
def save_config(self) -> None: def save_config(self):
appconfig.save(self.guiconfig) appconfig.save(self.guiconfig)
def joined_session_update(self) -> None: def joined_session_update(self):
if self.core.is_runtime(): if self.core.is_runtime():
self.menubar.set_state(is_runtime=True)
self.toolbar.set_runtime() self.toolbar.set_runtime()
else: else:
self.menubar.set_state(is_runtime=False)
self.toolbar.set_design() self.toolbar.set_design()
def get_enum_icon(self, image_enum: ImageEnum, *, width: int) -> PhotoImage: def close(self):
return images.from_enum(image_enum, width=width, scale=self.app_scale)
def get_file_icon(self, file_path: str, *, width: int) -> PhotoImage:
return images.from_file(file_path, width=width, scale=self.app_scale)
def close(self) -> None:
self.master.destroy() self.master.destroy()
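A minimal launch sketch for the Tk application above (left-hand signature); it assumes a display and a reachable core-daemon:

    from core.gui.app import Application

    app = Application(proxy=False, session_id=None)
    app.mainloop()  # standard Tk event loop inherited from ttk.Frame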

View file

@ -1,32 +1,32 @@
import os import os
import shutil import shutil
from pathlib import Path from pathlib import Path
from typing import Optional
import yaml import yaml
# gui home paths
from core.gui import themes from core.gui import themes
HOME_PATH: Path = Path.home().joinpath(".coregui") HOME_PATH = Path.home().joinpath(".coretk")
BACKGROUNDS_PATH: Path = HOME_PATH.joinpath("backgrounds") BACKGROUNDS_PATH = HOME_PATH.joinpath("backgrounds")
CUSTOM_EMANE_PATH: Path = HOME_PATH.joinpath("custom_emane") CUSTOM_EMANE_PATH = HOME_PATH.joinpath("custom_emane")
CUSTOM_SERVICE_PATH: Path = HOME_PATH.joinpath("custom_services") CUSTOM_SERVICE_PATH = HOME_PATH.joinpath("custom_services")
ICONS_PATH: Path = HOME_PATH.joinpath("icons") ICONS_PATH = HOME_PATH.joinpath("icons")
MOBILITY_PATH: Path = HOME_PATH.joinpath("mobility") MOBILITY_PATH = HOME_PATH.joinpath("mobility")
XMLS_PATH: Path = HOME_PATH.joinpath("xmls") XMLS_PATH = HOME_PATH.joinpath("xmls")
CONFIG_PATH: Path = HOME_PATH.joinpath("config.yaml") CONFIG_PATH = HOME_PATH.joinpath("gui.yaml")
LOG_PATH: Path = HOME_PATH.joinpath("gui.log") LOG_PATH = HOME_PATH.joinpath("gui.log")
SCRIPT_PATH: Path = HOME_PATH.joinpath("scripts") SCRIPT_PATH = HOME_PATH.joinpath("scripts")
# local paths # local paths
DATA_PATH: Path = Path(__file__).parent.joinpath("data") DATA_PATH = Path(__file__).parent.joinpath("data")
LOCAL_ICONS_PATH: Path = DATA_PATH.joinpath("icons").absolute() LOCAL_ICONS_PATH = DATA_PATH.joinpath("icons").absolute()
LOCAL_BACKGROUND_PATH: Path = DATA_PATH.joinpath("backgrounds").absolute() LOCAL_BACKGROUND_PATH = DATA_PATH.joinpath("backgrounds").absolute()
LOCAL_XMLS_PATH: Path = DATA_PATH.joinpath("xmls").absolute() LOCAL_XMLS_PATH = DATA_PATH.joinpath("xmls").absolute()
LOCAL_MOBILITY_PATH: Path = DATA_PATH.joinpath("mobility").absolute() LOCAL_MOBILITY_PATH = DATA_PATH.joinpath("mobility").absolute()
# configuration data # configuration data
TERMINALS: dict[str, str] = { TERMINALS = {
"xterm": "xterm -e", "xterm": "xterm -e",
"aterm": "aterm -e", "aterm": "aterm -e",
"eterm": "eterm -e", "eterm": "eterm -e",
@ -36,153 +36,26 @@ TERMINALS: dict[str, str] = {
"xfce4-terminal": "xfce4-terminal -x", "xfce4-terminal": "xfce4-terminal -x",
"gnome-terminal": "gnome-terminal --window --", "gnome-terminal": "gnome-terminal --window --",
} }
EDITORS: list[str] = ["$EDITOR", "vim", "emacs", "gedit", "nano", "vi"] EDITORS = ["$EDITOR", "vim", "emacs", "gedit", "nano", "vi"]
DEFAULT_IP4S = ["10.0.0.0", "192.168.0.0", "172.16.0.0"]
DEFAULT_IP4 = DEFAULT_IP4S[0]
DEFAULT_IP6S = ["2001::", "2002::", "a::"]
DEFAULT_IP6 = DEFAULT_IP6S[0]
DEFAULT_MAC = "00:00:00:aa:00:00"
class IndentDumper(yaml.Dumper): class IndentDumper(yaml.Dumper):
def increase_indent(self, flow: bool = False, indentless: bool = False) -> None: def increase_indent(self, flow=False, indentless=False):
super().increase_indent(flow, False) return super().increase_indent(flow, False)
-class CustomNode(yaml.YAMLObject):
-    yaml_tag: str = "!CustomNode"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(self, name: str, image: str, services: list[str]) -> None:
-        self.name: str = name
-        self.image: str = image
-        self.services: list[str] = services
-
-
-class CoreServer(yaml.YAMLObject):
-    yaml_tag: str = "!CoreServer"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(self, name: str, address: str) -> None:
-        self.name: str = name
-        self.address: str = address
-
-
-class Observer(yaml.YAMLObject):
-    yaml_tag: str = "!Observer"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(self, name: str, cmd: str) -> None:
-        self.name: str = name
-        self.cmd: str = cmd
-
-
-class PreferencesConfig(yaml.YAMLObject):
-    yaml_tag: str = "!PreferencesConfig"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(
-        self,
-        editor: str = EDITORS[1],
-        terminal: str = None,
-        theme: str = themes.THEME_DARK,
-        gui3d: str = "/usr/local/bin/std3d.sh",
-        width: int = 1000,
-        height: int = 750,
-    ) -> None:
-        self.theme: str = theme
-        self.editor: str = editor
-        self.terminal: str = terminal
-        self.gui3d: str = gui3d
-        self.width: int = width
-        self.height: int = height
-
-
-class LocationConfig(yaml.YAMLObject):
-    yaml_tag: str = "!LocationConfig"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(
-        self,
-        x: float = 0.0,
-        y: float = 0.0,
-        z: float = 0.0,
-        lat: float = 47.5791667,
-        lon: float = -122.132322,
-        alt: float = 2.0,
-        scale: float = 150.0,
-    ) -> None:
-        self.x: float = x
-        self.y: float = y
-        self.z: float = z
-        self.lat: float = lat
-        self.lon: float = lon
-        self.alt: float = alt
-        self.scale: float = scale
-
-
-class IpConfigs(yaml.YAMLObject):
-    yaml_tag: str = "!IpConfigs"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(self, **kwargs) -> None:
-        self.__setstate__(kwargs)
-
-    def __setstate__(self, kwargs):
-        self.ip4s: list[str] = kwargs.get(
-            "ip4s", ["10.0.0.0", "192.168.0.0", "172.16.0.0"]
-        )
-        self.ip4: str = kwargs.get("ip4", self.ip4s[0])
-        self.ip6s: list[str] = kwargs.get("ip6s", ["2001::", "2002::", "a::"])
-        self.ip6: str = kwargs.get("ip6", self.ip6s[0])
-        self.enable_ip4: bool = kwargs.get("enable_ip4", True)
-        self.enable_ip6: bool = kwargs.get("enable_ip6", True)
-
-
-class GuiConfig(yaml.YAMLObject):
-    yaml_tag: str = "!GuiConfig"
-    yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader
-
-    def __init__(
-        self,
-        preferences: PreferencesConfig = None,
-        location: LocationConfig = None,
-        servers: list[CoreServer] = None,
-        nodes: list[CustomNode] = None,
-        recentfiles: list[str] = None,
-        observers: list[Observer] = None,
-        scale: float = 1.0,
-        ips: IpConfigs = None,
-        mac: str = "00:00:00:aa:00:00",
-    ) -> None:
-        if preferences is None:
-            preferences = PreferencesConfig()
-        self.preferences: PreferencesConfig = preferences
-        if location is None:
-            location = LocationConfig()
-        self.location: LocationConfig = location
-        if servers is None:
-            servers = []
-        self.servers: list[CoreServer] = servers
-        if nodes is None:
-            nodes = []
-        self.nodes: list[CustomNode] = nodes
-        if recentfiles is None:
-            recentfiles = []
-        self.recentfiles: list[str] = recentfiles
-        if observers is None:
-            observers = []
-        self.observers: list[Observer] = observers
-        self.scale: float = scale
-        if ips is None:
-            ips = IpConfigs()
-        self.ips: IpConfigs = ips
-        self.mac: str = mac
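Aside (not part of the diff): every class removed above subclasses yaml.YAMLObject and pins yaml_loader to SafeLoader, so yaml.safe_load can rebuild typed objects directly from tagged YAML. Loading bypasses __init__ and restores attributes from the mapping (or via __setstate__ when one is defined, which is why IpConfigs implements it). A minimal round-trip sketch, assuming stock PyYAML and reusing the Observer shape above:

import yaml

class Observer(yaml.YAMLObject):
    yaml_tag = "!Observer"
    yaml_loader = yaml.SafeLoader

    def __init__(self, name, cmd):
        self.name = name
        self.cmd = cmd

# __init__ is not called here; attributes come from the tagged mapping.
obs = yaml.safe_load("!Observer {name: processes, cmd: ps -e}")
print(obs.name, obs.cmd)           # processes ps -e
text = yaml.dump(obs, default_flow_style=False)   # emits the !Observer tag again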
-def copy_files(current_path: Path, new_path: Path) -> None:
+def copy_files(current_path, new_path):
     for current_file in current_path.glob("*"):
         new_file = new_path.joinpath(current_file.name)
-        if not new_file.exists():
-            shutil.copy(current_file, new_file)
+        shutil.copy(current_file, new_file)

-def find_terminal() -> Optional[str]:
+def find_terminal():
     for term in sorted(TERMINALS):
         cmd = TERMINALS[term]
         if shutil.which(term):
@ -190,35 +63,67 @@ def find_terminal() -> Optional[str]:
     return None
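Aside (not part of the diff): the hunk boundary above hides the middle of find_terminal. In both versions it amounts to roughly the following — a sketch, not the verbatim source — walking the known terminal emulators in name order and returning the launch command of the first one present on PATH, falling back to None. It relies on the TERMINALS dict and shutil import shown earlier in this file.

def find_terminal():
    for term in sorted(TERMINALS):
        cmd = TERMINALS[term]
        # shutil.which() checks PATH for the terminal binary.
        if shutil.which(term):
            return cmd
    return None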
-def check_directory() -> None:
-    HOME_PATH.mkdir(exist_ok=True)
-    BACKGROUNDS_PATH.mkdir(exist_ok=True)
-    CUSTOM_EMANE_PATH.mkdir(exist_ok=True)
-    CUSTOM_SERVICE_PATH.mkdir(exist_ok=True)
-    ICONS_PATH.mkdir(exist_ok=True)
-    MOBILITY_PATH.mkdir(exist_ok=True)
-    XMLS_PATH.mkdir(exist_ok=True)
-    SCRIPT_PATH.mkdir(exist_ok=True)
-    copy_files(LOCAL_ICONS_PATH, ICONS_PATH)
-    copy_files(LOCAL_BACKGROUND_PATH, BACKGROUNDS_PATH)
-    copy_files(LOCAL_XMLS_PATH, XMLS_PATH)
-    copy_files(LOCAL_MOBILITY_PATH, MOBILITY_PATH)
-    if not CONFIG_PATH.exists():
-        terminal = find_terminal()
-        if "EDITOR" in os.environ:
-            editor = EDITORS[0]
-        else:
-            editor = EDITORS[1]
-        preferences = PreferencesConfig(editor, terminal)
-        config = GuiConfig(preferences=preferences)
-        save(config)
+def check_directory():
+    if HOME_PATH.exists():
+        return
+    HOME_PATH.mkdir()
+    BACKGROUNDS_PATH.mkdir()
+    CUSTOM_EMANE_PATH.mkdir()
+    CUSTOM_SERVICE_PATH.mkdir()
+    ICONS_PATH.mkdir()
+    MOBILITY_PATH.mkdir()
+    XMLS_PATH.mkdir()
+    SCRIPT_PATH.mkdir()
+    copy_files(LOCAL_ICONS_PATH, ICONS_PATH)
+    copy_files(LOCAL_BACKGROUND_PATH, BACKGROUNDS_PATH)
+    copy_files(LOCAL_XMLS_PATH, XMLS_PATH)
+    copy_files(LOCAL_MOBILITY_PATH, MOBILITY_PATH)
+    terminal = find_terminal()
+    if "EDITOR" in os.environ:
+        editor = EDITORS[0]
+    else:
+        editor = EDITORS[1]
+    config = {
+        "preferences": {
+            "theme": themes.THEME_DARK,
+            "editor": editor,
+            "terminal": terminal,
+            "gui3d": "/usr/local/bin/std3d.sh",
+            "width": 1000,
+            "height": 750,
+        },
+        "location": {
+            "x": 0.0,
+            "y": 0.0,
+            "z": 0.0,
+            "lat": 47.5791667,
+            "lon": -122.132322,
+            "alt": 2.0,
+            "scale": 150.0,
+        },
+        "servers": [],
+        "nodes": [],
+        "recentfiles": [],
+        "observers": [],
+        "scale": 1.0,
+        "ips": {
+            "ip4": DEFAULT_IP4,
+            "ip6": DEFAULT_IP6,
+            "ip4s": DEFAULT_IP4S,
+            "ip6s": DEFAULT_IP6S,
+        },
+        "mac": DEFAULT_MAC,
+    }
+    save(config)

-def read() -> GuiConfig:
+def read():
     with CONFIG_PATH.open("r") as f:
-        return yaml.safe_load(f)
+        return yaml.load(f, Loader=yaml.SafeLoader)

-def save(config: GuiConfig) -> None:
+def save(config):
     with CONFIG_PATH.open("w") as f:
         yaml.dump(config, f, Dumper=IndentDumper, default_flow_style=False)
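Aside (not part of the diff): typical use of this module — which appears to be core/gui/appconfig.py, judging by its imports — is check_directory() once at GUI startup to seed the home directory and default settings, read() to load them, and save() to persist changes from the preferences dialog. read() returns a plain dict on the "+" side of this compare and a GuiConfig object on the "-" side. A rough sketch against the dict-based version; the module name is an assumption:

from core.gui import appconfig, themes

appconfig.check_directory()                 # creates ~/.coretk and gui.yaml if missing
config = appconfig.read()                   # plain dict in this version
config["preferences"]["editor"] = "vim"     # one of the EDITORS entries
config["preferences"]["theme"] = themes.THEME_DARK
appconfig.save(config)                      # rewritten with IndentDumper formatting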

File diff suppressed because it is too large.

Binary image files changed (contents not shown): one added (230 B), one removed (385 B), one added (1.1 KiB).

Some files were not shown because too many files have changed in this diff.