diff --git a/.github/workflows/daemon-checks.yml b/.github/workflows/daemon-checks.yml index dc169dcf..52440467 100644 --- a/.github/workflows/daemon-checks.yml +++ b/.github/workflows/daemon-checks.yml @@ -4,13 +4,13 @@ on: [push] jobs: build: - runs-on: ubuntu-22.04 + runs-on: ubuntu-18.04 steps: - uses: actions/checkout@v1 - - name: Set up Python 3.9 + - name: Set up Python 3.6 uses: actions/setup-python@v1 with: - python-version: 3.9 + python-version: 3.6 - name: install poetry run: | python -m pip install --upgrade pip diff --git a/.github/workflows/documentation.yml b/.github/workflows/documentation.yml deleted file mode 100644 index abbadab3..00000000 --- a/.github/workflows/documentation.yml +++ /dev/null @@ -1,21 +0,0 @@ -name: documentation -on: - push: - branches: - - master -permissions: - contents: write -jobs: - deploy: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v3 - - uses: actions/setup-python@v4 - with: - python-version: 3.x - - uses: actions/cache@v2 - with: - key: ${{ github.ref }} - path: .cache - - run: pip install mkdocs-material - - run: mkdocs gh-deploy --force diff --git a/.gitignore b/.gitignore index ca4c07dd..2012df9d 100644 --- a/.gitignore +++ b/.gitignore @@ -14,13 +14,9 @@ config.h.in config.log config.status configure -configure~ debian stamp-h1 -# python virtual environments -venv - # generated protobuf files *_pb2.py *_pb2_grpc.py @@ -62,6 +58,3 @@ daemon/setup.py # python __pycache__ - -# ignore core player files -*.core diff --git a/CHANGELOG.md b/CHANGELOG.md index 425f2ae0..30b5c711 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,230 +1,3 @@ -## 2023-08-01 CORE 9.0.3 - -* Installation - * updated various dependencies -* Documentation - * improved GUI docs to include node interaction and note xhost usage - * \#780 - fixed gRPC examples - * \#787 - complete documentation revamp to leverage mkdocs material - * \#790 - fixed custom emane model example -* core-daemon - * update type hinting to avoid deprecated imports - * updated commands ran within docker based nodes to have proper environment variables - * fixed issue improperly setting session options over gRPC - * \#668 - add fedora sbin path to frr service - * \#774 - fixed pcap configservice - * \#805 - fixed radvd configservice template error -* core-gui - * update type hinting to avoid deprecated imports - * fixed issue allowing duplicate named hook scripts - * fixed issue joining sessions with RJ45 nodes -* utility scripts - * fixed issue in core-cleanup for removing devices - -## 2023-03-02 CORE 9.0.2 - -* Installation - * updated python dependencies, including invoke to resolve python 3.10+ issues - * improved example dockerfiles to use less space for built images -* Documentation - * updated emane install instructions - * added Docker related issues to install instructions -* core-daemon - * fixed issue using invalid device name in sysctl commands - * updated PTP nodes to properly disable mac learning for their linux bridge - * fixed issue for LXC nodes to properly use a configured image name and write it to XML - * \#742 - fixed issue with bad wlan node id being used - * \#744 - fixed issue not properly setting broadcast address -* core-gui - * fixed sample1.xml to remove SSH service - * fixed emane demo examples - * fixed issue displaying emane configs generally configured for a node - -## 2022-11-28 CORE 9.0.1 - -* Installation - * updated protobuf and grpcio-tools versions in pyproject.toml to account for bad version mix - -## 2022-11-18 CORE 9.0.0 - -* Breaking Changes 
- * removed session nodes file - * removed session state file - * emane now runs in one process per nem with unique control ports - * grpc client has been refactored and updated - * removed tcl/legacy gui, imn file support and the tlv api - * link configuration is now different, but consistent, for wired links -* Installation - * added packaging for single file distribution - * python3.9 is now the minimum required version - * updated Dockerfile examples - * updated various python dependencies - * virtual environment is now installed to /opt/core/venv -* Documentation - * updated emane invoke task examples - * revamped install documentation - * added wireless node notes -* core-gui - * updated config services to display rendered templated and allow editing - * fixed node icon issue when updating preferences - * \#89 - throughput widget now works for hubs/switches - * \#691 - fixed custom nodes to properly use config services -* gRPC API - * add linked call to support linking and unlinking interfaces without destroying them - * fixed issue during start session clearing out session options - * added call to get rendered config service files - * removed get_node_links from links from client - * nem id and nem port have been added to GetNode and AddLink calls -* core-daemon - * wired links always create two veth pairs joined by a bridge - * node interfaces are now configured within the container to apply to outgoing traffic - * session.add_node now uses NodeOptions, allowing for node specific options - * fixed issue with xml reading node canvas values - * removed Session.add_node_file - * fixed get requirements logic - * fixed docker/lxd node support terminal commands on remote servers - * improved docker node command execution time using nsenter - * new wireless node type added to support dynamic loss based on distance - * \#513 - add and deleting distributed links during runtime is now supported - * \#703 - fixed issue not starting emane event listening service - -## 2022-03-21 CORE 8.2.0 - -* core-gui - * improved failed starts to trigger runtime to allow node investigation -* core-daemon - * improved default service loading to use a full import path - * updated session instantiation to always set to a runtime state -* core-cli - * \#672 - fixed xml loading - * \#578 - restored json flag and added geo output to session overview -* Documentation - * updated emane example and documentation - * improved table markdown - -## 2022-02-18 CORE 8.1.0 - -* Installation - * updated dependency versions to account for known vulnerabilities -* GUI - * fixed issue drawing asymmetric link configurations when joining a session -* daemon - * fixed issue getting templates and creating files for config services - * added by directional support for network to network links - * \#647 - fixed issue when creating RJ45 nodes - * \#646 - fixed issue when creating files for Docker nodes - * \#645 - improved wlan change updates to account for all updates with no delay -* services - * fixed file generation for OSPFv2 config service - -## 2022-01-12 CORE 8.0.0 - -*Breaking Changes - * heavily refactored gRPC client, removing some calls, adding others, all using type hinted classes representing their protobuf counterparts - * emane adjustments to run each nem in its own process, includes adjustments to configuration, which may cause issues - * internal daemon cleanup and refactoring, in a script directly driving a scenario is used -* Installation - * added options to allow installation without ospf mdr - * removed tasks 
that are no longer needed - * updates to properly install/remove example files - * pipx/poetry/invoke versions are now locked to help avoid update related issues - * install.sh is now setup.sh and is a convenience to get tool setup to run invoke -* Documentation - * formally added notes for Docker and LXD based node types - * added config services - * Updated README to have quick notes for installation - * \#563 - update to note how to enable core service -* Examples - * \#598 - update to fix sample1.imn to working order -* core-daemon - * emane global configuration is now configurable per nem - * fixed wlan loss to support float values - * improved default service loading to use full core path - * improved emane model loading to occur one time - * fixed handling rj45 link edits from tlv api - * fixed wlan config getting a default value for the promiscuous setting when not provided - * ebtables usage has now been replaced with nftables - * \#564 - logging is now using module named loggers - * \#573 - emane processes are not created 1 to 1 with nems - * \#608 - update lxml version - * \#609 - update pyyaml version - * \#623 - fixed issue with ovs mode and mac learning -* core-gui - * config services are now the default service type - * legacy services are marked as deprecated - * fix to properly load session options - * logging is now using module named loggers - * save as will not update the current session file name as expected - * fix to properly clear out removed customized services - * adding directories to a service that do not exist, is now valid - * added flag to exit after creating gui directory from command line - * added new options to enable/disable ip4/ip6 assignment - * improved canvas draw order, when joining sessions - * improved node copy/paste to avoid issues when pasting text into service config dialogs - * each canvas will not correctly save and load their size from xml -* gRPC API - * session options are now returned for GetSession - * fixed issue not properly creating the session directory during start session definition state - * updates to separate editing a node and moving a node, new MoveNode call added, EditNode is now used for editing icons -* Services - * fixed default route config service - * config services now have options for shadowing directories, including per node customization - -## 2021-09-17 CORE 7.5.2 - -* Installation - * \#596 - fixes issue related to installing poetry by pinning version to 1.1.7 - * updates pipx installation to pinned version 0.16.4 -* core-daemon - * \#600 - fixes known vulnerability for pillow dependency by updating version - -## 2021-04-15 CORE 7.5.1 - -* core-pygui - * fixed issues creating and drawing custom nodes - -## 2021-03-11 CORE 7.5.0 - -* core-daemon - * fixed issue setting mobility loop value properly - * fixed issue that some states would not properly remove session directories - * \#560 - fixed issues with sdt integration for mobility movement and layer creation -* core-pygui - * added multiple canvas support - * added support to hide nodes and restore them visually - * update to assign full netmasks to wireless connected nodes by default - * update to display services and action controls for nodes during runtime - * fixed issues with custom nodes - * fixed issue auto assigning macs, avoiding duplication - * fixed issue joining session with different netmasks - * fixed issues when deleting a session from the sessions dialog - * \#550 - fixed issue not sending all service customization data -* core-cli - * added 
delete session command - -## 2021-01-11 CORE 7.4.0 - -* Installation - * fixed issue for automated install assuming ID_LIKE is always present in /etc/os-release -* gRPC API - * fixed issue stopping session and not properly going to data collect state - * fixed issue to have start session properly create a directory before configuration state -* core-pygui - * fixed issue handling deletion of wired link to a switch - * avoid saving edge metadata to xml when values are default - * fixed issue editing node mac addresses - * added support for configuring interface names - * fixed issue with potential node names to allow hyphens and remove under bars - * \#531 - fixed issue changing distributed nodes back to local -* core-daemon - * fixed issue to properly handle deleting links from a network to network node - * updated xml to support writing and reading link buffer configurations - * reverted change and removed mac learning from wlan, due to promiscuous like behavior - * fixed issue creating control interfaces when starting services - * fixed deadlock issue when clearing a session using sdt - * \#116 - fixed issue for wlans handling multiple mobility scripts at once - * \#539 - fixed issue in udp tlv api - ## 2020-12-02 CORE 7.3.0 * core-daemon diff --git a/Dockerfile b/Dockerfile deleted file mode 100644 index 155cacc0..00000000 --- a/Dockerfile +++ /dev/null @@ -1,126 +0,0 @@ -# syntax=docker/dockerfile:1 -FROM ubuntu:22.04 -LABEL Description="CORE Docker Ubuntu Image" - -ARG PREFIX=/usr/local -ARG BRANCH=master -ARG PROTOC_VERSION=3.19.6 -ARG VENV_PATH=/opt/core/venv -ENV DEBIAN_FRONTEND=noninteractive -ENV PATH="$PATH:${VENV_PATH}/bin" -WORKDIR /opt - -# install system dependencies - -RUN apt-get update -y && \ - apt-get install -y software-properties-common - -RUN add-apt-repository "deb http://archive.ubuntu.com/ubuntu jammy universe" - -RUN apt-get update -y && \ - apt-get install -y --no-install-recommends \ - automake \ - bash \ - ca-certificates \ - ethtool \ - gawk \ - gcc \ - g++ \ - iproute2 \ - iputils-ping \ - libc-dev \ - libev-dev \ - libreadline-dev \ - libtool \ - nftables \ - python3 \ - python3-pip \ - python3-tk \ - pkg-config \ - tk \ - xauth \ - xterm \ - wireshark \ - vim \ - build-essential \ - nano \ - firefox \ - net-tools \ - rsync \ - openssh-server \ - openssh-client \ - vsftpd \ - atftpd \ - atftp \ - mini-httpd \ - lynx \ - tcpdump \ - iperf \ - iperf3 \ - tshark \ - openssh-sftp-server \ - bind9 \ - bind9-utils \ - openvpn \ - isc-dhcp-server \ - isc-dhcp-client \ - whois \ - ipcalc \ - socat \ - hping3 \ - libgtk-3-0 \ - librest-0.7-0 \ - libgtk-3-common \ - dconf-gsettings-backend \ - libsoup-gnome2.4-1 \ - libsoup2.4-1 \ - dconf-service \ - x11-xserver-utils \ - ftp \ - git \ - sudo \ - wget \ - tzdata \ - libpcap-dev \ - libpcre3-dev \ - libprotobuf-dev \ - libxml2-dev \ - protobuf-compiler \ - unzip \ - uuid-dev \ - iproute2 \ - vlc \ - iputils-ping && \ - apt-get autoremove -y - -# install core -RUN git clone https://github.com/coreemu/core && \ - cd core && \ - git checkout ${BRANCH} && \ - ./setup.sh && \ - PATH=/root/.local/bin:$PATH inv install -v -p ${PREFIX} && \ - cd /opt && \ - rm -rf ospf-mdr - -# install emane -RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-x86_64.zip && \ - mkdir protoc && \ - unzip protoc-${PROTOC_VERSION}-linux-x86_64.zip -d protoc && \ - git clone https://github.com/adjacentlink/emane.git && \ - cd emane && \ - ./autogen.sh && \ - ./configure 
--prefix=/usr && \ - make -j$(nproc) && \ - make install && \ - cd src/python && \ - make clean && \ - PATH=/opt/protoc/bin:$PATH make && \ - ${VENV_PATH}/bin/python -m pip install . && \ - cd /opt && \ - rm -rf protoc && \ - rm -rf emane && \ - rm -f protoc-${PROTOC_VERSION}-linux-x86_64.zip - -WORKDIR /root - -CMD /opt/core/venv/bin/core-daemon diff --git a/Makefile.am b/Makefile.am index 2b5f29e2..7a3799fc 100644 --- a/Makefile.am +++ b/Makefile.am @@ -6,6 +6,10 @@ if WANT_DOCS DOCS = docs man endif +if WANT_GUI + GUI = gui +endif + if WANT_DAEMON DAEMON = daemon endif @@ -15,13 +19,12 @@ if WANT_NETNS endif # keep docs last due to dependencies on binaries -SUBDIRS = $(DAEMON) $(NETNS) $(DOCS) +SUBDIRS = $(GUI) $(DAEMON) $(NETNS) $(DOCS) ACLOCAL_AMFLAGS = -I config # extra files to include with distribution tarball EXTRA_DIST = bootstrap.sh \ - package \ LICENSE \ README.md \ ASSIGNMENT_OF_COPYRIGHT.pdf \ @@ -48,19 +51,18 @@ fpm -s dir -t deb -n core-distributed \ --description "Common Open Research Emulator Distributed Package" \ --url https://github.com/coreemu/core \ --vendor "$(PACKAGE_VENDOR)" \ - -p core-distributed_VERSION_ARCH.deb \ + -p core_distributed_VERSION_ARCH.deb \ -v $(PACKAGE_VERSION) \ -d "ethtool" \ -d "procps" \ -d "libc6 >= 2.14" \ -d "bash >= 3.0" \ - -d "nftables" \ + -d "ebtables" \ -d "iproute2" \ -d "libev4" \ -d "openssh-server" \ -d "xterm" \ - netns/vnoded=/usr/bin/ \ - netns/vcmd=/usr/bin/ + -C $(DESTDIR) endef define fpm-distributed-rpm = @@ -70,86 +72,23 @@ fpm -s dir -t rpm -n core-distributed \ --description "Common Open Research Emulator Distributed Package" \ --url https://github.com/coreemu/core \ --vendor "$(PACKAGE_VENDOR)" \ - -p core-distributed_VERSION_ARCH.rpm \ + -p core_distributed_VERSION_ARCH.rpm \ -v $(PACKAGE_VERSION) \ -d "ethtool" \ -d "procps-ng" \ -d "bash >= 3.0" \ - -d "nftables" \ + -d "ebtables" \ -d "iproute" \ -d "libev" \ -d "net-tools" \ -d "openssh-server" \ -d "xterm" \ - netns/vnoded=/usr/bin/ \ - netns/vcmd=/usr/bin/ + -C $(DESTDIR) endef -define fpm-rpm = -fpm -s dir -t rpm -n core \ - -m "$(PACKAGE_MAINTAINERS)" \ - --license "BSD" \ - --description "core vnoded/vcmd and system dependencies" \ - --url https://github.com/coreemu/core \ - --vendor "$(PACKAGE_VENDOR)" \ - -p core_VERSION_ARCH.rpm \ - -v $(PACKAGE_VERSION) \ - --rpm-init package/core-daemon \ - --after-install package/after-install.sh \ - --after-remove package/after-remove.sh \ - -d "ethtool" \ - -d "tk" \ - -d "procps-ng" \ - -d "bash >= 3.0" \ - -d "ebtables" \ - -d "iproute" \ - -d "libev" \ - -d "net-tools" \ - -d "nftables" \ - netns/vnoded=/usr/bin/ \ - netns/vcmd=/usr/bin/ \ - package/etc/core.conf=/etc/core/ \ - package/etc/logging.conf=/etc/core/ \ - package/examples=/opt/core/ \ - daemon/dist/core-$(PACKAGE_VERSION)-py3-none-any.whl=/opt/core/ -endef - -define fpm-deb = -fpm -s dir -t deb -n core \ - -m "$(PACKAGE_MAINTAINERS)" \ - --license "BSD" \ - --description "core vnoded/vcmd and system dependencies" \ - --url https://github.com/coreemu/core \ - --vendor "$(PACKAGE_VENDOR)" \ - -p core_VERSION_ARCH.deb \ - -v $(PACKAGE_VERSION) \ - --deb-systemd package/core-daemon.service \ - --deb-no-default-config-files \ - --after-install package/after-install.sh \ - --after-remove package/after-remove.sh \ - -d "ethtool" \ - -d "tk" \ - -d "libtk-img" \ - -d "procps" \ - -d "libc6 >= 2.14" \ - -d "bash >= 3.0" \ - -d "ebtables" \ - -d "iproute2" \ - -d "libev4" \ - -d "nftables" \ - netns/vnoded=/usr/bin/ \ - netns/vcmd=/usr/bin/ \ - 
package/etc/core.conf=/etc/core/ \ - package/etc/logging.conf=/etc/core/ \ - package/examples=/opt/core/ \ - daemon/dist/core-$(PACKAGE_VERSION)-py3-none-any.whl=/opt/core/ -endef - -.PHONY: fpm -fpm: clean-local-fpm - cd daemon && poetry build -f wheel - $(call fpm-deb) - $(call fpm-rpm) +.PHONY: fpm-distributed +fpm-distributed: clean-local-fpm + $(MAKE) -C netns install DESTDIR=$(DESTDIR) $(call fpm-distributed-deb) $(call fpm-distributed-rpm) @@ -176,6 +115,7 @@ $(info creating file $1 from $1.in) -e 's,[@]CORE_STATE_DIR[@],$(CORE_STATE_DIR),g' \ -e 's,[@]CORE_DATA_DIR[@],$(CORE_DATA_DIR),g' \ -e 's,[@]CORE_CONF_DIR[@],$(CORE_CONF_DIR),g' \ + -e 's,[@]CORE_GUI_CONF_DIR[@],$(CORE_GUI_CONF_DIR),g' \ < $1.in > $1 endef @@ -183,6 +123,7 @@ all: change-files .PHONY: change-files change-files: + $(call change-files,gui/core-gui) $(call change-files,daemon/core/constants.py) $(call change-files,netns/setup.py) diff --git a/README.md b/README.md index efab2e70..62f21628 100644 --- a/README.md +++ b/README.md @@ -1,107 +1,24 @@ -# Index -- CORE -- Docker Setup - - Precompiled container image - - Build container image from source - - Adding extra packages - -- Useful commands -- License - # CORE CORE: Common Open Research Emulator -Copyright (c)2005-2022 the Boeing Company. +Copyright (c)2005-2020 the Boeing Company. See the LICENSE file included in this distribution. -# Docker Setup +## About -Here you have 2 choices +The Common Open Research Emulator (CORE) is a tool for emulating +networks on one or more machines. You can connect these emulated +networks to live networks. CORE consists of a GUI for drawing +topologies of lightweight virtual machines, and Python modules for +scripting network emulation. -## Precompiled container image +## Documentation & Support -```bash +We are leveraging GitHub hosted documentation and Discord for persistent +chat rooms. This allows for more dynamic conversations and the +capability to respond faster. Feel free to join us at the link below. -# Start container -sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged --restart unless-stopped git.olympuslab.net/afonso/core-extra:latest - -``` -## Build container image from source - -```bash -# Clone the repo -git clone https://gitea.olympuslab.net/afonso/core-extra.git - -# cd into the directory -cd core-extra - -# build the docker image -sudo docker build -t core-extra . - -# start container -sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged --restart unless-stopped core-extra - -``` - -### Adding extra packages - -To add extra packages you must modify the Dockerfile and then compile the docker image. -If you install it after starting the container it will, by docker nature, be reverted on the next boot of the container. - -# Useful commands - -I have the following functions on my fish shell -to help me better use core - -THIS ONLY WORKS ON FISH, MODIFY FOR BASH OR ZSH - -```fish - -# RUN CORE GUI -function core - xhost +local:root - sudo docker exec -it core core-gui -end - -# RUN BASH INSIDE THE CONTAINER -function core-bash - sudo docker exec -it core /bin/bash -end - - -# LAUNCH NODE BASH ON THE HOST MACHINE -function launch-term --argument nodename - sudo docker exec -it core xterm -bg black -fg white -fa 'DejaVu Sans Mono' -fs 16 -e vcmd -c /tmp/pycore.1/$nodename -- /bin/bash -end - -#TO RUN ANY OTHER COMMAND -sudo docker exec -it core COMAND_GOES_HERE - -``` - -## LICENSE - -Copyright (c) 2005-2018, the Boeing Company. 
- -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. -2. Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF -THE POSSIBILITY OF SUCH DAMAGE. +* [Documentation](https://coreemu.github.io/core/) +* [Discord Channel](https://discord.gg/AKd7kmP) diff --git a/bootstrap.sh b/bootstrap.sh index 25fdecfd..ab3d741c 100755 --- a/bootstrap.sh +++ b/bootstrap.sh @@ -1,5 +1,9 @@ #!/bin/sh # +# (c)2010-2012 the Boeing Company +# +# author: Jeff Ahrenholz +# # Bootstrap the autoconf system. # diff --git a/configure.ac b/configure.ac index 4e56507a..7b91b304 100644 --- a/configure.ac +++ b/configure.ac @@ -2,7 +2,7 @@ # Process this file with autoconf to produce a configure script. # this defines the CORE version number, must be static for AC_INIT -AC_INIT(core, 9.0.3) +AC_INIT(core, 7.3.0) # autoconf and automake initialization AC_CONFIG_SRCDIR([netns/version.h.in]) @@ -30,14 +30,25 @@ AC_SUBST(CORE_CONF_DIR) AC_SUBST(CORE_DATA_DIR) AC_SUBST(CORE_STATE_DIR) -# documentation option +# CORE GUI configuration files and preferences in CORE_GUI_CONF_DIR +# scenario files in ~/.core/configs/ +AC_ARG_WITH([guiconfdir], + [AS_HELP_STRING([--with-guiconfdir=dir], + [specify GUI configuration directory])], + [CORE_GUI_CONF_DIR="$with_guiconfdir"], + [CORE_GUI_CONF_DIR="\$\${HOME}/.core"]) +AC_SUBST(CORE_GUI_CONF_DIR) +AC_ARG_ENABLE([gui], + [AS_HELP_STRING([--enable-gui[=ARG]], + [build and install the GUI (default is yes)])], + [], [enable_gui=yes]) +AC_SUBST(enable_gui) AC_ARG_ENABLE([docs], [AS_HELP_STRING([--enable-docs[=ARG]], [build python documentation (default is no)])], [], [enable_docs=no]) AC_SUBST(enable_docs) -# python option AC_ARG_ENABLE([python], [AS_HELP_STRING([--enable-python[=ARG]], [build and install the python bindings (default is yes)])], @@ -83,7 +94,28 @@ if test "x$enable_daemon" = "xyes"; then want_python=yes want_linux_netns=yes - AM_PATH_PYTHON(3.9) + # Checks for libraries. + AC_CHECK_LIB([netgraph], [NgMkSockNode]) + + # Checks for header files. + AC_CHECK_HEADERS([arpa/inet.h fcntl.h limits.h stdint.h stdlib.h string.h sys/ioctl.h sys/mount.h sys/socket.h sys/time.h termios.h unistd.h]) + + # Checks for typedefs, structures, and compiler characteristics. + AC_C_INLINE + AC_TYPE_INT32_T + AC_TYPE_PID_T + AC_TYPE_SIZE_T + AC_TYPE_SSIZE_T + AC_TYPE_UINT32_T + AC_TYPE_UINT8_T + + # Checks for library functions. 
+ AC_FUNC_FORK + AC_FUNC_MALLOC + AC_FUNC_REALLOC + AC_CHECK_FUNCS([atexit dup2 gettimeofday memset socket strerror uname]) + + AM_PATH_PYTHON(3.6) AS_IF([$PYTHON -m grpc_tools.protoc -h &> /dev/null], [], [AC_MSG_ERROR([please install python grpcio-tools])]) AC_CHECK_PROG(sysctl_path, sysctl, $as_dir, no, $SEARCHPATH) @@ -91,9 +123,9 @@ if test "x$enable_daemon" = "xyes"; then AC_MSG_ERROR([Could not locate sysctl (from procps package).]) fi - AC_CHECK_PROG(nftables_path, nft, $as_dir, no, $SEARCHPATH) - if test "x$nftables_path" = "xno" ; then - AC_MSG_ERROR([Could not locate nftables (from nftables package).]) + AC_CHECK_PROG(ebtables_path, ebtables, $as_dir, no, $SEARCHPATH) + if test "x$ebtables_path" = "xno" ; then + AC_MSG_ERROR([Could not locate ebtables (from ebtables package).]) fi AC_CHECK_PROG(ip_path, ip, $as_dir, no, $SEARCHPATH) @@ -139,25 +171,6 @@ fi if [ test "x$enable_daemon" = "xyes" || test "x$enable_vnodedonly" = "xyes" ] ; then want_linux_netns=yes - - # Checks for header files. - AC_CHECK_HEADERS([arpa/inet.h fcntl.h limits.h stdint.h stdlib.h string.h sys/ioctl.h sys/mount.h sys/socket.h sys/time.h termios.h unistd.h]) - - # Checks for typedefs, structures, and compiler characteristics. - AC_C_INLINE - AC_TYPE_INT32_T - AC_TYPE_PID_T - AC_TYPE_SIZE_T - AC_TYPE_SSIZE_T - AC_TYPE_UINT32_T - AC_TYPE_UINT8_T - - # Checks for library functions. - AC_FUNC_FORK - AC_FUNC_MALLOC - AC_FUNC_REALLOC - AC_CHECK_FUNCS([atexit dup2 gettimeofday memset socket strerror uname]) - PKG_CHECK_MODULES(libev, libev, AC_MSG_RESULT([found libev using pkgconfig OK]) AC_SUBST(libev_CFLAGS) @@ -196,6 +209,7 @@ if [test "x$want_python" = "xyes" && test "x$enable_docs" = "xyes"] ; then fi # Variable substitutions +AM_CONDITIONAL(WANT_GUI, test x$enable_gui = xyes) AM_CONDITIONAL(WANT_DAEMON, test x$enable_daemon = xyes) AM_CONDITIONAL(WANT_DOCS, test x$want_docs = xyes) AM_CONDITIONAL(WANT_PYTHON, test x$want_python = xyes) @@ -210,6 +224,9 @@ fi # Output files AC_CONFIG_FILES([Makefile + gui/version.tcl + gui/Makefile + gui/icons/Makefile man/Makefile docs/Makefile daemon/Makefile @@ -231,12 +248,17 @@ Build: Prefix: ${prefix} Exec Prefix: ${exec_prefix} +GUI: + GUI path: ${CORE_LIB_DIR} + GUI config: ${CORE_GUI_CONF_DIR} + Daemon: Daemon path: ${bindir} Daemon config: ${CORE_CONF_DIR} Python: ${PYTHON} Features to build: + Build GUI: ${enable_gui} Build Daemon: ${enable_daemon} Documentation: ${want_docs} diff --git a/daemon/Makefile.am b/daemon/Makefile.am index 2585ea1a..7528dc01 100644 --- a/daemon/Makefile.am +++ b/daemon/Makefile.am @@ -1,4 +1,8 @@ # CORE +# (c)2010-2012 the Boeing Company. +# See the LICENSE file included in this distribution. +# +# author: Jeff Ahrenholz # # Makefile for building netns components. 
# @@ -21,7 +25,10 @@ DISTCLEANFILES = Makefile.in # files to include with distribution tarball EXTRA_DIST = core \ + data \ doc/conf.py.in \ + examples \ + scripts \ tests \ setup.cfg \ poetry.lock \ diff --git a/daemon/core/__init__.py b/daemon/core/__init__.py index c847c8dc..40ca3604 100644 --- a/daemon/core/__init__.py +++ b/daemon/core/__init__.py @@ -2,3 +2,6 @@ import logging.config # setup default null handler logging.getLogger(__name__).addHandler(logging.NullHandler()) + +# disable paramiko logging +logging.getLogger("paramiko").setLevel(logging.WARNING) diff --git a/daemon/core/api/grpc/client.py b/daemon/core/api/grpc/client.py index 2a5a1d44..e28233fc 100644 --- a/daemon/core/api/grpc/client.py +++ b/daemon/core/api/grpc/client.py @@ -4,118 +4,95 @@ gRpc client for interfacing with CORE. import logging import threading -from collections.abc import Callable, Generator, Iterable from contextlib import contextmanager -from pathlib import Path -from queue import Queue -from typing import Any, Optional +from typing import Any, Callable, Dict, Generator, Iterable, List, Optional import grpc -from core.api.grpc import core_pb2, core_pb2_grpc, emane_pb2, wrappers +from core.api.grpc import configservices_pb2, core_pb2, core_pb2_grpc from core.api.grpc.configservices_pb2 import ( GetConfigServiceDefaultsRequest, - GetConfigServiceRenderedRequest, + GetConfigServiceDefaultsResponse, + GetConfigServicesRequest, + GetConfigServicesResponse, + GetNodeConfigServiceConfigsRequest, + GetNodeConfigServiceConfigsResponse, GetNodeConfigServiceRequest, + GetNodeConfigServiceResponse, + GetNodeConfigServicesRequest, + GetNodeConfigServicesResponse, + SetNodeConfigServiceRequest, + SetNodeConfigServiceResponse, ) -from core.api.grpc.core_pb2 import ( - ExecuteScriptRequest, - GetConfigRequest, - GetWirelessConfigRequest, - LinkedRequest, - WirelessConfigRequest, - WirelessLinkedRequest, -) +from core.api.grpc.core_pb2 import ExecuteScriptRequest, ExecuteScriptResponse from core.api.grpc.emane_pb2 import ( EmaneLinkRequest, + EmaneLinkResponse, + EmaneModelConfig, + EmanePathlossesRequest, + EmanePathlossesResponse, + GetEmaneConfigRequest, + GetEmaneConfigResponse, GetEmaneEventChannelRequest, + GetEmaneEventChannelResponse, GetEmaneModelConfigRequest, + GetEmaneModelConfigResponse, + GetEmaneModelConfigsRequest, + GetEmaneModelConfigsResponse, + GetEmaneModelsRequest, + GetEmaneModelsResponse, + SetEmaneConfigRequest, + SetEmaneConfigResponse, SetEmaneModelConfigRequest, + SetEmaneModelConfigResponse, ) from core.api.grpc.mobility_pb2 import ( GetMobilityConfigRequest, + GetMobilityConfigResponse, + GetMobilityConfigsRequest, + GetMobilityConfigsResponse, MobilityActionRequest, + MobilityActionResponse, MobilityConfig, SetMobilityConfigRequest, + SetMobilityConfigResponse, ) from core.api.grpc.services_pb2 import ( + GetNodeServiceConfigsRequest, + GetNodeServiceConfigsResponse, GetNodeServiceFileRequest, + GetNodeServiceFileResponse, GetNodeServiceRequest, + GetNodeServiceResponse, GetServiceDefaultsRequest, + GetServiceDefaultsResponse, + GetServicesRequest, + GetServicesResponse, + ServiceAction, ServiceActionRequest, + ServiceActionResponse, + ServiceConfig, ServiceDefaults, + ServiceFileConfig, + SetNodeServiceFileRequest, + SetNodeServiceFileResponse, + SetNodeServiceRequest, + SetNodeServiceResponse, SetServiceDefaultsRequest, + SetServiceDefaultsResponse, ) from core.api.grpc.wlan_pb2 import ( GetWlanConfigRequest, + GetWlanConfigResponse, + GetWlanConfigsRequest, + GetWlanConfigsResponse, 
SetWlanConfigRequest, + SetWlanConfigResponse, WlanConfig, WlanLinkRequest, + WlanLinkResponse, ) -from core.api.grpc.wrappers import LinkOptions from core.emulator.data import IpPrefixes -from core.errors import CoreError -from core.utils import SetQueue - -logger = logging.getLogger(__name__) - - -class MoveNodesStreamer: - def __init__(self, session_id: int, source: str = None) -> None: - self.session_id: int = session_id - self.source: Optional[str] = source - self.queue: SetQueue = SetQueue() - - def send_position(self, node_id: int, x: float, y: float) -> None: - position = wrappers.Position(x=x, y=y) - request = wrappers.MoveNodesRequest( - session_id=self.session_id, - node_id=node_id, - source=self.source, - position=position, - ) - self.send(request) - - def send_geo(self, node_id: int, lon: float, lat: float, alt: float) -> None: - geo = wrappers.Geo(lon=lon, lat=lat, alt=alt) - request = wrappers.MoveNodesRequest( - session_id=self.session_id, node_id=node_id, source=self.source, geo=geo - ) - self.send(request) - - def send(self, request: wrappers.MoveNodesRequest) -> None: - self.queue.put(request) - - def stop(self) -> None: - self.queue.put(None) - - def next(self) -> Optional[core_pb2.MoveNodesRequest]: - request: Optional[wrappers.MoveNodesRequest] = self.queue.get() - if request: - return request.to_proto() - else: - return request - - def iter(self) -> Iterable: - return iter(self.next, None) - - -class EmanePathlossesStreamer: - def __init__(self) -> None: - self.queue: Queue = Queue() - - def send(self, request: Optional[wrappers.EmanePathlossesRequest]) -> None: - self.queue.put(request) - - def next(self) -> Optional[emane_pb2.EmanePathlossesRequest]: - request: Optional[wrappers.EmanePathlossesRequest] = self.queue.get() - if request: - return request.to_proto() - else: - return request - - def iter(self): - return iter(self.next, None) class InterfaceHelper: @@ -135,7 +112,7 @@ class InterfaceHelper: def create_iface( self, node_id: int, iface_id: int, name: str = None, mac: str = None - ) -> wrappers.Interface: + ) -> core_pb2.Interface: """ Create an interface protobuf object. @@ -146,7 +123,7 @@ class InterfaceHelper: :return: interface protobuf """ iface_data = self.prefixes.gen_iface(node_id, name, mac) - return wrappers.Interface( + return core_pb2.Interface( id=iface_id, name=iface_data.name, ip4=iface_data.ip4, @@ -157,65 +134,36 @@ class InterfaceHelper: ) -def throughput_listener( - stream: Any, handler: Callable[[wrappers.ThroughputsEvent], None] -) -> None: +def stream_listener(stream: Any, handler: Callable[[core_pb2.Event], None]) -> None: """ - Listen for throughput events and provide them to the handler. + Listen for stream events and provide them to the handler. :param stream: grpc stream that will provide events :param handler: function that handles an event :return: nothing """ try: - for event_proto in stream: - event = wrappers.ThroughputsEvent.from_proto(event_proto) + for event in stream: handler(event) except grpc.RpcError as e: if e.code() == grpc.StatusCode.CANCELLED: - logger.debug("throughput stream closed") + logging.debug("stream closed") else: - logger.exception("throughput stream error") + logging.exception("stream error") -def cpu_listener( - stream: Any, handler: Callable[[wrappers.CpuUsageEvent], None] -) -> None: +def start_streamer(stream: Any, handler: Callable[[core_pb2.Event], None]) -> None: """ - Listen for cpu events and provide them to the handler. 
+ Convenience method for starting a grpc stream thread for handling streamed events. :param stream: grpc stream that will provide events :param handler: function that handles an event :return: nothing """ - try: - for event_proto in stream: - event = wrappers.CpuUsageEvent.from_proto(event_proto) - handler(event) - except grpc.RpcError as e: - if e.code() == grpc.StatusCode.CANCELLED: - logger.debug("cpu stream closed") - else: - logger.exception("cpu stream error") - - -def event_listener(stream: Any, handler: Callable[[wrappers.Event], None]) -> None: - """ - Listen for session events and provide them to the handler. - - :param stream: grpc stream that will provide events - :param handler: function that handles an event - :return: nothing - """ - try: - for event_proto in stream: - event = wrappers.Event.from_proto(event_proto) - handler(event) - except grpc.RpcError as e: - if e.code() == grpc.StatusCode.CANCELLED: - logger.debug("session stream closed") - else: - logger.exception("session stream error") + thread = threading.Thread( + target=stream_listener, args=(stream, handler), daemon=True + ) + thread.start() class CoreGrpcClient: @@ -235,127 +183,290 @@ class CoreGrpcClient: self.proxy: bool = proxy def start_session( - self, session: wrappers.Session, definition: bool = False - ) -> tuple[bool, list[str]]: + self, + session_id: int, + nodes: List[core_pb2.Node], + links: List[core_pb2.Link], + location: core_pb2.SessionLocation = None, + hooks: List[core_pb2.Hook] = None, + emane_config: Dict[str, str] = None, + emane_model_configs: List[EmaneModelConfig] = None, + wlan_configs: List[WlanConfig] = None, + mobility_configs: List[MobilityConfig] = None, + service_configs: List[ServiceConfig] = None, + service_file_configs: List[ServiceFileConfig] = None, + asymmetric_links: List[core_pb2.Link] = None, + config_service_configs: List[configservices_pb2.ConfigServiceConfig] = None, + ) -> core_pb2.StartSessionResponse: """ Start a session. 
- :param session: session to start - :param definition: True to only define session data, False to start session - :return: tuple of result and exception strings + :param session_id: id of session + :param nodes: list of nodes to create + :param links: list of links to create + :param location: location to set + :param hooks: session hooks to set + :param emane_config: emane configuration to set + :param emane_model_configs: node emane model configurations + :param wlan_configs: node wlan configurations + :param mobility_configs: node mobility configurations + :param service_configs: node service configurations + :param service_file_configs: node service file configurations + :param asymmetric_links: asymmetric links to edit + :param config_service_configs: config service configurations + :return: start session response """ request = core_pb2.StartSessionRequest( - session=session.to_proto(), definition=definition + session_id=session_id, + nodes=nodes, + links=links, + location=location, + hooks=hooks, + emane_config=emane_config, + emane_model_configs=emane_model_configs, + wlan_configs=wlan_configs, + mobility_configs=mobility_configs, + service_configs=service_configs, + service_file_configs=service_file_configs, + asymmetric_links=asymmetric_links, + config_service_configs=config_service_configs, ) - response = self.stub.StartSession(request) - return response.result, list(response.exceptions) + return self.stub.StartSession(request) - def stop_session(self, session_id: int) -> bool: + def stop_session(self, session_id: int) -> core_pb2.StopSessionResponse: """ Stop a running session. :param session_id: id of session - :return: True for success, False otherwise + :return: stop session response :raises grpc.RpcError: when session doesn't exist """ request = core_pb2.StopSessionRequest(session_id=session_id) - response = self.stub.StopSession(request) - return response.result + return self.stub.StopSession(request) - def create_session(self, session_id: int = None) -> wrappers.Session: + def create_session(self, session_id: int = None) -> core_pb2.CreateSessionResponse: """ Create a session. :param session_id: id for session, default is None and one will be created for you - :return: session id + :return: response with created session id """ request = core_pb2.CreateSessionRequest(session_id=session_id) - response = self.stub.CreateSession(request) - return wrappers.Session.from_proto(response.session) + return self.stub.CreateSession(request) - def delete_session(self, session_id: int) -> bool: + def delete_session(self, session_id: int) -> core_pb2.DeleteSessionResponse: """ Delete a session. :param session_id: id of session - :return: True for success, False otherwise + :return: response with result of deletion success or failure :raises grpc.RpcError: when session doesn't exist """ request = core_pb2.DeleteSessionRequest(session_id=session_id) - response = self.stub.DeleteSession(request) - return response.result + return self.stub.DeleteSession(request) - def get_sessions(self) -> list[wrappers.SessionSummary]: + def get_sessions(self) -> core_pb2.GetSessionsResponse: """ Retrieves all currently known sessions. 
:return: response with a list of currently known session, their state and number of nodes """ - response = self.stub.GetSessions(core_pb2.GetSessionsRequest()) - sessions = [] - for session_proto in response.sessions: - session = wrappers.SessionSummary.from_proto(session_proto) - sessions.append(session) - return sessions + return self.stub.GetSessions(core_pb2.GetSessionsRequest()) - def check_session(self, session_id: int) -> bool: + def check_session(self, session_id: int) -> core_pb2.CheckSessionResponse: """ Check if a session exists. :param session_id: id of session to check for - :return: True if exists, False otherwise + :return: response with result if session was found """ request = core_pb2.CheckSessionRequest(session_id=session_id) - response = self.stub.CheckSession(request) - return response.result + return self.stub.CheckSession(request) - def get_session(self, session_id: int) -> wrappers.Session: + def get_session(self, session_id: int) -> core_pb2.GetSessionResponse: """ Retrieve a session. :param session_id: id of session - :return: session + :return: response with sessions state, nodes, and links :raises grpc.RpcError: when session doesn't exist """ request = core_pb2.GetSessionRequest(session_id=session_id) - response = self.stub.GetSession(request) - return wrappers.Session.from_proto(response.session) + return self.stub.GetSession(request) + + def get_session_options( + self, session_id: int + ) -> core_pb2.GetSessionOptionsResponse: + """ + Retrieve session options as a dict with id mapping. + + :param session_id: id of session + :return: response with a list of configuration groups + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionOptionsRequest(session_id=session_id) + return self.stub.GetSessionOptions(request) + + def set_session_options( + self, session_id: int, config: Dict[str, str] + ) -> core_pb2.SetSessionOptionsResponse: + """ + Set options for a session. + + :param session_id: id of session + :param config: configuration values to set + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionOptionsRequest( + session_id=session_id, config=config + ) + return self.stub.SetSessionOptions(request) + + def get_session_metadata( + self, session_id: int + ) -> core_pb2.GetSessionMetadataResponse: + """ + Retrieve session metadata as a dict with id mapping. + + :param session_id: id of session + :return: response with metadata dict + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionMetadataRequest(session_id=session_id) + return self.stub.GetSessionMetadata(request) + + def set_session_metadata( + self, session_id: int, config: Dict[str, str] + ) -> core_pb2.SetSessionMetadataResponse: + """ + Set metadata for a session. + + :param session_id: id of session + :param config: configuration values to set + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionMetadataRequest( + session_id=session_id, config=config + ) + return self.stub.SetSessionMetadata(request) + + def get_session_location( + self, session_id: int + ) -> core_pb2.GetSessionLocationResponse: + """ + Get session location. 
+ + :param session_id: id of session + :return: response with session position reference and scale + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionLocationRequest(session_id=session_id) + return self.stub.GetSessionLocation(request) + + def set_session_location( + self, + session_id: int, + x: float = None, + y: float = None, + z: float = None, + lat: float = None, + lon: float = None, + alt: float = None, + scale: float = None, + ) -> core_pb2.SetSessionLocationResponse: + """ + Set session location. + + :param session_id: id of session + :param x: x position + :param y: y position + :param z: z position + :param lat: latitude position + :param lon: longitude position + :param alt: altitude position + :param scale: geo scale + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + location = core_pb2.SessionLocation( + x=x, y=y, z=z, lat=lat, lon=lon, alt=alt, scale=scale + ) + request = core_pb2.SetSessionLocationRequest( + session_id=session_id, location=location + ) + return self.stub.SetSessionLocation(request) + + def set_session_state( + self, session_id: int, state: core_pb2.SessionState + ) -> core_pb2.SetSessionStateResponse: + """ + Set session state. + + :param session_id: id of session + :param state: session state to transition to + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionStateRequest(session_id=session_id, state=state) + return self.stub.SetSessionState(request) + + def set_session_user( + self, session_id: int, user: str + ) -> core_pb2.SetSessionUserResponse: + """ + Set session user, used for helping to find files without full paths. + + :param session_id: id of session + :param user: user to set for session + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionUserRequest(session_id=session_id, user=user) + return self.stub.SetSessionUser(request) + + def add_session_server( + self, session_id: int, name: str, host: str + ) -> core_pb2.AddSessionServerResponse: + """ + Add distributed session server. + + :param session_id: id of session + :param name: name of server to add + :param host: host address to connect to + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.AddSessionServerRequest( + session_id=session_id, name=name, host=host + ) + return self.stub.AddSessionServer(request) def alert( self, session_id: int, - level: wrappers.ExceptionLevel, + level: core_pb2.ExceptionLevel, source: str, text: str, node_id: int = None, - ) -> bool: - """ - Initiate an alert to be broadcast out to all listeners. 
- - :param session_id: id of session - :param level: alert level - :param source: source of alert - :param text: alert text - :param node_id: node associated with alert - :return: True for success, False otherwise - """ + ) -> core_pb2.SessionAlertResponse: request = core_pb2.SessionAlertRequest( session_id=session_id, - level=level.value, + level=level, source=source, text=text, node_id=node_id, ) - response = self.stub.SessionAlert(request) - return response.result + return self.stub.SessionAlert(request) def events( self, session_id: int, - handler: Callable[[wrappers.Event], None], - events: list[wrappers.EventType] = None, + handler: Callable[[core_pb2.Event], None], + events: List[core_pb2.Event] = None, ) -> grpc.Future: """ Listen for session events. @@ -368,14 +479,11 @@ class CoreGrpcClient: """ request = core_pb2.EventsRequest(session_id=session_id, events=events) stream = self.stub.Events(request) - thread = threading.Thread( - target=event_listener, args=(stream, handler), daemon=True - ) - thread.start() + start_streamer(stream, handler) return stream def throughputs( - self, session_id: int, handler: Callable[[wrappers.ThroughputsEvent], None] + self, session_id: int, handler: Callable[[core_pb2.ThroughputsEvent], None] ) -> grpc.Future: """ Listen for throughput events with information for interfaces and bridges. @@ -387,14 +495,11 @@ class CoreGrpcClient: """ request = core_pb2.ThroughputsRequest(session_id=session_id) stream = self.stub.Throughputs(request) - thread = threading.Thread( - target=throughput_listener, args=(stream, handler), daemon=True - ) - thread.start() + start_streamer(stream, handler) return stream def cpu_usage( - self, delay: int, handler: Callable[[wrappers.CpuUsageEvent], None] + self, delay: int, handler: Callable[[core_pb2.CpuUsageEvent], None] ) -> grpc.Future: """ Listen for cpu usage events with the given repeat delay. @@ -405,130 +510,98 @@ class CoreGrpcClient: """ request = core_pb2.CpuUsageRequest(delay=delay) stream = self.stub.CpuUsage(request) - thread = threading.Thread( - target=cpu_listener, args=(stream, handler), daemon=True - ) - thread.start() + start_streamer(stream, handler) return stream - def add_node(self, session_id: int, node: wrappers.Node, source: str = None) -> int: + def add_node( + self, session_id: int, node: core_pb2.Node, source: str = None + ) -> core_pb2.AddNodeResponse: """ Add node to session. :param session_id: session id :param node: node to add :param source: source application - :return: id of added node + :return: response with node id :raises grpc.RpcError: when session doesn't exist """ request = core_pb2.AddNodeRequest( - session_id=session_id, node=node.to_proto(), source=source + session_id=session_id, node=node, source=source ) - response = self.stub.AddNode(request) - return response.node_id + return self.stub.AddNode(request) - def get_node( - self, session_id: int, node_id: int - ) -> tuple[wrappers.Node, list[wrappers.Interface], list[wrappers.Link]]: + def get_node(self, session_id: int, node_id: int) -> core_pb2.GetNodeResponse: """ Get node details. 
:param session_id: session id :param node_id: node id - :return: tuple of node and its interfaces + :return: response with node details :raises grpc.RpcError: when session or node doesn't exist """ request = core_pb2.GetNodeRequest(session_id=session_id, node_id=node_id) - response = self.stub.GetNode(request) - node = wrappers.Node.from_proto(response.node) - ifaces = [] - for iface_proto in response.ifaces: - iface = wrappers.Interface.from_proto(iface_proto) - ifaces.append(iface) - links = [] - for link_proto in response.links: - link = wrappers.Link.from_proto(link_proto) - links.append(link) - return node, ifaces, links + return self.stub.GetNode(request) def edit_node( - self, session_id: int, node_id: int, icon: str = None, source: str = None - ) -> bool: + self, + session_id: int, + node_id: int, + position: core_pb2.Position = None, + icon: str = None, + geo: core_pb2.Geo = None, + source: str = None, + ) -> core_pb2.EditNodeResponse: """ Edit a node's icon and/or location, can only use position(x,y) or geo(lon, lat, alt), not both. :param session_id: session id :param node_id: node id + :param position: x,y location for node :param icon: path to icon for gui to use for node + :param geo: lon,lat,alt location for node :param source: application source - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session or node doesn't exist """ request = core_pb2.EditNodeRequest( - session_id=session_id, node_id=node_id, icon=icon, source=source - ) - response = self.stub.EditNode(request) - return response.result - - def move_node( - self, - session_id: int, - node_id: int, - position: wrappers.Position = None, - geo: wrappers.Geo = None, - source: str = None, - ) -> bool: - """ - Move node using provided position or geo location. - - :param session_id: session id - :param node_id: node id - :param position: x,y position to move to - :param geo: geospatial position to move to - :param source: source generating motion - :return: nothing - :raises grpc.RpcError: when session or nodes do not exist - """ - if not position and not geo: - raise CoreError("must provide position or geo to move node") - position = position.to_proto() if position else None - geo = geo.to_proto() if geo else None - request = core_pb2.MoveNodeRequest( session_id=session_id, node_id=node_id, position=position, - geo=geo, + icon=icon, source=source, + geo=geo, ) - response = self.stub.MoveNode(request) - return response.result + return self.stub.EditNode(request) - def move_nodes(self, streamer: MoveNodesStreamer) -> None: + def move_nodes( + self, move_iterator: Iterable[core_pb2.MoveNodesRequest] + ) -> core_pb2.MoveNodesResponse: """ Stream node movements using the provided iterator. - :param streamer: move nodes streamer - :return: nothing + :param move_iterator: iterator for generating node movements + :return: move nodes response :raises grpc.RpcError: when session or nodes do not exist """ - self.stub.MoveNodes(streamer.iter()) + return self.stub.MoveNodes(move_iterator) - def delete_node(self, session_id: int, node_id: int, source: str = None) -> bool: + def delete_node( + self, session_id: int, node_id: int, source: str = None + ) -> core_pb2.DeleteNodeResponse: """ Delete node from session. 
:param session_id: session id :param node_id: node id :param source: application source - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session doesn't exist """ request = core_pb2.DeleteNodeRequest( session_id=session_id, node_id=node_id, source=source ) - response = self.stub.DeleteNode(request) - return response.result + return self.stub.DeleteNode(request) def node_command( self, @@ -537,7 +610,7 @@ class CoreGrpcClient: command: str, wait: bool = True, shell: bool = False, - ) -> tuple[int, str]: + ) -> core_pb2.NodeCommandResponse: """ Send command to a node and get the output. @@ -546,7 +619,7 @@ class CoreGrpcClient: :param command: command to run on node :param wait: wait for command to complete :param shell: send shell command - :return: returns tuple of return code and output + :return: response with command combined stdout/stderr :raises grpc.RpcError: when session or node doesn't exist """ request = core_pb2.NodeCommandRequest( @@ -556,214 +629,303 @@ class CoreGrpcClient: wait=wait, shell=shell, ) - response = self.stub.NodeCommand(request) - return response.return_code, response.output + return self.stub.NodeCommand(request) - def get_node_terminal(self, session_id: int, node_id: int) -> str: + def get_node_terminal( + self, session_id: int, node_id: int + ) -> core_pb2.GetNodeTerminalResponse: """ Retrieve terminal command string for launching a local terminal. :param session_id: session id :param node_id: node id - :return: node terminal + :return: response with a node terminal command :raises grpc.RpcError: when session or node doesn't exist """ request = core_pb2.GetNodeTerminalRequest( session_id=session_id, node_id=node_id ) - response = self.stub.GetNodeTerminal(request) - return response.terminal + return self.stub.GetNodeTerminal(request) + + def get_node_links( + self, session_id: int, node_id: int + ) -> core_pb2.GetNodeLinksResponse: + """ + Get current links for a node. + + :param session_id: session id + :param node_id: node id + :return: response with a list of links + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.GetNodeLinksRequest(session_id=session_id, node_id=node_id) + return self.stub.GetNodeLinks(request) def add_link( - self, session_id: int, link: wrappers.Link, source: str = None - ) -> tuple[bool, wrappers.Interface, wrappers.Interface]: + self, + session_id: int, + node1_id: int, + node2_id: int, + iface1: core_pb2.Interface = None, + iface2: core_pb2.Interface = None, + options: core_pb2.LinkOptions = None, + source: str = None, + ) -> core_pb2.AddLinkResponse: """ Add a link between nodes. 
:param session_id: session id - :param link: link to add + :param node1_id: node one id + :param node2_id: node two id + :param iface1: node one interface data + :param iface2: node two interface data + :param options: options for link (jitter, bandwidth, etc) :param source: application source - :return: tuple of result and finalized interface values + :return: response with result of success or failure :raises grpc.RpcError: when session or one of the nodes don't exist """ - request = core_pb2.AddLinkRequest( - session_id=session_id, link=link.to_proto(), source=source + link = core_pb2.Link( + node1_id=node1_id, + node2_id=node2_id, + type=core_pb2.LinkType.WIRED, + iface1=iface1, + iface2=iface2, + options=options, ) - response = self.stub.AddLink(request) - iface1 = wrappers.Interface.from_proto(response.iface1) - iface2 = wrappers.Interface.from_proto(response.iface2) - return response.result, iface1, iface2 + request = core_pb2.AddLinkRequest( + session_id=session_id, link=link, source=source + ) + return self.stub.AddLink(request) def edit_link( - self, session_id: int, link: wrappers.Link, source: str = None - ) -> bool: + self, + session_id: int, + node1_id: int, + node2_id: int, + options: core_pb2.LinkOptions, + iface1_id: int = None, + iface2_id: int = None, + source: str = None, + ) -> core_pb2.EditLinkResponse: """ Edit a link between nodes. :param session_id: session id - :param link: link to edit + :param node1_id: node one id + :param node2_id: node two id + :param options: options for link (jitter, bandwidth, etc) + :param iface1_id: node one interface id + :param iface2_id: node two interface id :param source: application source :return: response with result of success or failure :raises grpc.RpcError: when session or one of the nodes don't exist """ - iface1_id = link.iface1.id if link.iface1 else None - iface2_id = link.iface2.id if link.iface2 else None request = core_pb2.EditLinkRequest( session_id=session_id, - node1_id=link.node1_id, - node2_id=link.node2_id, - options=link.options.to_proto(), + node1_id=node1_id, + node2_id=node2_id, + options=options, iface1_id=iface1_id, iface2_id=iface2_id, source=source, ) - response = self.stub.EditLink(request) - return response.result + return self.stub.EditLink(request) def delete_link( - self, session_id: int, link: wrappers.Link, source: str = None - ) -> bool: + self, + session_id: int, + node1_id: int, + node2_id: int, + iface1_id: int = None, + iface2_id: int = None, + source: str = None, + ) -> core_pb2.DeleteLinkResponse: """ Delete a link between nodes. :param session_id: session id - :param link: link to delete + :param node1_id: node one id + :param node2_id: node two id + :param iface1_id: node one interface id + :param iface2_id: node two interface id :param source: application source :return: response with result of success or failure :raises grpc.RpcError: when session doesn't exist """ - iface1_id = link.iface1.id if link.iface1 else None - iface2_id = link.iface2.id if link.iface2 else None request = core_pb2.DeleteLinkRequest( session_id=session_id, - node1_id=link.node1_id, - node2_id=link.node2_id, + node1_id=node1_id, + node2_id=node2_id, iface1_id=iface1_id, iface2_id=iface2_id, source=source, ) - response = self.stub.DeleteLink(request) - return response.result + return self.stub.DeleteLink(request) + + def get_hooks(self, session_id: int) -> core_pb2.GetHooksResponse: + """ + Get all hook scripts. 
+ + :param session_id: session id + :return: response with a list of hooks + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetHooksRequest(session_id=session_id) + return self.stub.GetHooks(request) + + def add_hook( + self, + session_id: int, + state: core_pb2.SessionState, + file_name: str, + file_data: str, + ) -> core_pb2.AddHookResponse: + """ + Add hook scripts. + + :param session_id: session id + :param state: state to trigger hook + :param file_name: name of file for hook script + :param file_data: hook script contents + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + hook = core_pb2.Hook(state=state, file=file_name, data=file_data) + request = core_pb2.AddHookRequest(session_id=session_id, hook=hook) + return self.stub.AddHook(request) + + def get_mobility_configs(self, session_id: int) -> GetMobilityConfigsResponse: + """ + Get all mobility configurations. + + :param session_id: session id + :return: response with a dict of node ids to mobility configurations + :raises grpc.RpcError: when session doesn't exist + """ + request = GetMobilityConfigsRequest(session_id=session_id) + return self.stub.GetMobilityConfigs(request) def get_mobility_config( self, session_id: int, node_id: int - ) -> dict[str, wrappers.ConfigOption]: + ) -> GetMobilityConfigResponse: """ Get mobility configuration for a node. :param session_id: session id :param node_id: node id - :return: dict of config name to options + :return: response with a list of configuration groups :raises grpc.RpcError: when session or node doesn't exist """ request = GetMobilityConfigRequest(session_id=session_id, node_id=node_id) - response = self.stub.GetMobilityConfig(request) - return wrappers.ConfigOption.from_dict(response.config) + return self.stub.GetMobilityConfig(request) def set_mobility_config( - self, session_id: int, node_id: int, config: dict[str, str] - ) -> bool: + self, session_id: int, node_id: int, config: Dict[str, str] + ) -> SetMobilityConfigResponse: """ Set mobility configuration for a node. :param session_id: session id :param node_id: node id :param config: mobility configuration - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session or node doesn't exist """ mobility_config = MobilityConfig(node_id=node_id, config=config) request = SetMobilityConfigRequest( session_id=session_id, mobility_config=mobility_config ) - response = self.stub.SetMobilityConfig(request) - return response.result + return self.stub.SetMobilityConfig(request) def mobility_action( - self, session_id: int, node_id: int, action: wrappers.MobilityAction - ) -> bool: + self, session_id: int, node_id: int, action: ServiceAction + ) -> MobilityActionResponse: """ Send a mobility action for a node. :param session_id: session id :param node_id: node id :param action: action to take - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session or node doesn't exist """ request = MobilityActionRequest( - session_id=session_id, node_id=node_id, action=action.value + session_id=session_id, node_id=node_id, action=action ) - response = self.stub.MobilityAction(request) - return response.result + return self.stub.MobilityAction(request) - def get_config(self) -> wrappers.CoreConfig: + def get_services(self) -> GetServicesResponse: """ - Retrieve the current core configuration values. 
+ Get all currently loaded services. - :return: core configuration + :return: response with a list of services """ - request = GetConfigRequest() - response = self.stub.GetConfig(request) - return wrappers.CoreConfig.from_proto(response) + request = GetServicesRequest() + return self.stub.GetServices(request) - def get_service_defaults(self, session_id: int) -> list[wrappers.ServiceDefault]: + def get_service_defaults(self, session_id: int) -> GetServiceDefaultsResponse: """ Get default services for different default node models. :param session_id: session id - :return: list of service defaults + :return: response with a dict of node model to a list of services :raises grpc.RpcError: when session doesn't exist """ request = GetServiceDefaultsRequest(session_id=session_id) - response = self.stub.GetServiceDefaults(request) - defaults = [] - for default_proto in response.defaults: - default = wrappers.ServiceDefault.from_proto(default_proto) - defaults.append(default) - return defaults + return self.stub.GetServiceDefaults(request) def set_service_defaults( - self, session_id: int, service_defaults: dict[str, list[str]] - ) -> bool: + self, session_id: int, service_defaults: Dict[str, List[str]] + ) -> SetServiceDefaultsResponse: """ Set default services for node models. :param session_id: session id :param service_defaults: node models to lists of services - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session doesn't exist """ defaults = [] - for model in service_defaults: - services = service_defaults[model] - default = ServiceDefaults(model=model, services=services) + for node_type in service_defaults: + services = service_defaults[node_type] + default = ServiceDefaults(node_type=node_type, services=services) defaults.append(default) request = SetServiceDefaultsRequest(session_id=session_id, defaults=defaults) - response = self.stub.SetServiceDefaults(request) - return response.result + return self.stub.SetServiceDefaults(request) + + def get_node_service_configs( + self, session_id: int + ) -> GetNodeServiceConfigsResponse: + """ + Get service data for a node. + + :param session_id: session id + :return: response with all node service configs + :raises grpc.RpcError: when session doesn't exist + """ + request = GetNodeServiceConfigsRequest(session_id=session_id) + return self.stub.GetNodeServiceConfigs(request) def get_node_service( self, session_id: int, node_id: int, service: str - ) -> wrappers.NodeServiceData: + ) -> GetNodeServiceResponse: """ Get service data for a node. :param session_id: session id :param node_id: node id :param service: service name - :return: node service data + :return: response with node service data :raises grpc.RpcError: when session or node doesn't exist """ request = GetNodeServiceRequest( session_id=session_id, node_id=node_id, service=service ) - response = self.stub.GetNodeService(request) - return wrappers.NodeServiceData.from_proto(response.service) + return self.stub.GetNodeService(request) def get_node_service_file( self, session_id: int, node_id: int, service: str, file_name: str - ) -> str: + ) -> GetNodeServiceFileResponse: """ Get a service file for a node. 
@@ -771,22 +933,74 @@ class CoreGrpcClient: :param node_id: node id :param service: service name :param file_name: file name to get data for - :return: file data + :return: response with file data :raises grpc.RpcError: when session or node doesn't exist """ request = GetNodeServiceFileRequest( session_id=session_id, node_id=node_id, service=service, file=file_name ) - response = self.stub.GetNodeServiceFile(request) - return response.data + return self.stub.GetNodeServiceFile(request) - def service_action( + def set_node_service( self, session_id: int, node_id: int, service: str, - action: wrappers.ServiceAction, - ) -> bool: + files: List[str] = None, + directories: List[str] = None, + startup: List[str] = None, + validate: List[str] = None, + shutdown: List[str] = None, + ) -> SetNodeServiceResponse: + """ + Set service data for a node. + + :param session_id: session id + :param node_id: node id + :param service: service name + :param files: service files + :param directories: service directories + :param startup: startup commands + :param validate: validation commands + :param shutdown: shutdown commands + :return: response with result of success or failure + :raises grpc.RpcError: when session or node doesn't exist + """ + config = ServiceConfig( + node_id=node_id, + service=service, + files=files, + directories=directories, + startup=startup, + validate=validate, + shutdown=shutdown, + ) + request = SetNodeServiceRequest(session_id=session_id, config=config) + return self.stub.SetNodeService(request) + + def set_node_service_file( + self, session_id: int, node_id: int, service: str, file_name: str, data: str + ) -> SetNodeServiceFileResponse: + """ + Set a service file for a node. + + :param session_id: session id + :param node_id: node id + :param service: service name + :param file_name: file name to save + :param data: data to save for file + :return: response with result of success or failure + :raises grpc.RpcError: when session or node doesn't exist + """ + config = ServiceFileConfig( + node_id=node_id, service=service, file=file_name, data=data + ) + request = SetNodeServiceFileRequest(session_id=session_id, config=config) + return self.stub.SetNodeServiceFile(request) + + def service_action( + self, session_id: int, node_id: int, service: str, action: ServiceAction + ) -> ServiceActionResponse: """ Send an action to a service for a node. @@ -795,74 +1009,92 @@ class CoreGrpcClient: :param service: service name :param action: action for service (start, stop, restart, validate) - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session or node doesn't exist """ request = ServiceActionRequest( - session_id=session_id, node_id=node_id, service=service, action=action.value + session_id=session_id, node_id=node_id, service=service, action=action ) - response = self.stub.ServiceAction(request) - return response.result + return self.stub.ServiceAction(request) - def config_service_action( - self, - session_id: int, - node_id: int, - service: str, - action: wrappers.ServiceAction, - ) -> bool: + def get_wlan_configs(self, session_id: int) -> GetWlanConfigsResponse: """ - Send an action to a config service for a node. + Get all wlan configurations. 
:param session_id: session id - :param node_id: node id - :param service: config service name - :param action: action for service (start, stop, restart, - validate) - :return: True for success, False otherwise - :raises grpc.RpcError: when session or node doesn't exist + :return: response with a dict of node ids to wlan configurations + :raises grpc.RpcError: when session doesn't exist """ - request = ServiceActionRequest( - session_id=session_id, node_id=node_id, service=service, action=action.value - ) - response = self.stub.ConfigServiceAction(request) - return response.result + request = GetWlanConfigsRequest(session_id=session_id) + return self.stub.GetWlanConfigs(request) - def get_wlan_config( - self, session_id: int, node_id: int - ) -> dict[str, wrappers.ConfigOption]: + def get_wlan_config(self, session_id: int, node_id: int) -> GetWlanConfigResponse: """ Get wlan configuration for a node. :param session_id: session id :param node_id: node id - :return: dict of names to options + :return: response with a list of configuration groups :raises grpc.RpcError: when session doesn't exist """ request = GetWlanConfigRequest(session_id=session_id, node_id=node_id) - response = self.stub.GetWlanConfig(request) - return wrappers.ConfigOption.from_dict(response.config) + return self.stub.GetWlanConfig(request) def set_wlan_config( - self, session_id: int, node_id: int, config: dict[str, str] - ) -> bool: + self, session_id: int, node_id: int, config: Dict[str, str] + ) -> SetWlanConfigResponse: """ Set wlan configuration for a node. :param session_id: session id :param node_id: node id :param config: wlan configuration - :return: True for success, False otherwise + :return: response with result of success or failure :raises grpc.RpcError: when session doesn't exist """ wlan_config = WlanConfig(node_id=node_id, config=config) request = SetWlanConfigRequest(session_id=session_id, wlan_config=wlan_config) - response = self.stub.SetWlanConfig(request) - return response.result + return self.stub.SetWlanConfig(request) + + def get_emane_config(self, session_id: int) -> GetEmaneConfigResponse: + """ + Get session emane configuration. + + :param session_id: session id + :return: response with a list of configuration groups + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneConfigRequest(session_id=session_id) + return self.stub.GetEmaneConfig(request) + + def set_emane_config( + self, session_id: int, config: Dict[str, str] + ) -> SetEmaneConfigResponse: + """ + Set session emane configuration. + + :param session_id: session id + :param config: emane configuration + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + request = SetEmaneConfigRequest(session_id=session_id, config=config) + return self.stub.SetEmaneConfig(request) + + def get_emane_models(self, session_id: int) -> GetEmaneModelsResponse: + """ + Get session emane models. + + :param session_id: session id + :return: response with a list of emane models + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneModelsRequest(session_id=session_id) + return self.stub.GetEmaneModels(request) def get_emane_model_config( self, session_id: int, node_id: int, model: str, iface_id: int = -1 - ) -> dict[str, wrappers.ConfigOption]: + ) -> GetEmaneModelConfigResponse: """ Get emane model configuration for a node or a node's interface. 
@@ -870,33 +1102,53 @@ class CoreGrpcClient: :param node_id: node id :param model: emane model name :param iface_id: node interface id - :return: dict of names to options + :return: response with a list of configuration groups :raises grpc.RpcError: when session doesn't exist """ request = GetEmaneModelConfigRequest( session_id=session_id, node_id=node_id, model=model, iface_id=iface_id ) - response = self.stub.GetEmaneModelConfig(request) - return wrappers.ConfigOption.from_dict(response.config) + return self.stub.GetEmaneModelConfig(request) def set_emane_model_config( - self, session_id: int, emane_model_config: wrappers.EmaneModelConfig - ) -> bool: + self, + session_id: int, + node_id: int, + model: str, + config: Dict[str, str] = None, + iface_id: int = -1, + ) -> SetEmaneModelConfigResponse: """ Set emane model configuration for a node or a node's interface. :param session_id: session id - :param emane_model_config: emane model config to set - :return: True for success, False otherwise + :param node_id: node id + :param model: emane model name + :param config: emane model configuration + :param iface_id: node interface id + :return: response with result of success or failure :raises grpc.RpcError: when session doesn't exist """ - request = SetEmaneModelConfigRequest( - session_id=session_id, emane_model_config=emane_model_config.to_proto() + model_config = EmaneModelConfig( + node_id=node_id, model=model, config=config, iface_id=iface_id ) - response = self.stub.SetEmaneModelConfig(request) - return response.result + request = SetEmaneModelConfigRequest( + session_id=session_id, emane_model_config=model_config + ) + return self.stub.SetEmaneModelConfig(request) - def save_xml(self, session_id: int, file_path: str) -> None: + def get_emane_model_configs(self, session_id: int) -> GetEmaneModelConfigsResponse: + """ + Get all EMANE model configurations for a session. + + :param session_id: session to get emane model configs + :return: response with a dictionary of node/interface ids to configurations + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneModelConfigsRequest(session_id=session_id) + return self.stub.GetEmaneModelConfigs(request) + + def save_xml(self, session_id: int, file_path: str) -> core_pb2.SaveXmlResponse: """ Save the current scenario to an XML file. @@ -910,21 +1162,22 @@ class CoreGrpcClient: with open(file_path, "w") as xml_file: xml_file.write(response.data) - def open_xml(self, file_path: Path, start: bool = False) -> tuple[bool, int]: + def open_xml(self, file_path: str, start: bool = False) -> core_pb2.OpenXmlResponse: """ Load a local scenario XML file to open as a new session. 
:param file_path: path of scenario XML file - :param start: tuple of result and session id when successful - :return: tuple of result and session id + :param start: True to start session, False otherwise + :return: response with opened session id """ - with file_path.open("r") as f: - data = f.read() - request = core_pb2.OpenXmlRequest(data=data, start=start, file=str(file_path)) - response = self.stub.OpenXml(request) - return response.result, response.session_id + with open(file_path, "r") as xml_file: + data = xml_file.read() + request = core_pb2.OpenXmlRequest(data=data, start=start, file=file_path) + return self.stub.OpenXml(request) - def emane_link(self, session_id: int, nem1: int, nem2: int, linked: bool) -> bool: + def emane_link( + self, session_id: int, nem1: int, nem2: int, linked: bool + ) -> EmaneLinkResponse: """ Helps broadcast wireless link/unlink between EMANE nodes. @@ -932,108 +1185,131 @@ class CoreGrpcClient: :param nem1: first nem for emane link :param nem2: second nem for emane link :param linked: True to link, False to unlink - :return: True for success, False otherwise + :return: get emane link response :raises grpc.RpcError: when session or nodes related to nems do not exist """ request = EmaneLinkRequest( session_id=session_id, nem1=nem1, nem2=nem2, linked=linked ) - response = self.stub.EmaneLink(request) - return response.result + return self.stub.EmaneLink(request) - def get_ifaces(self) -> list[str]: + def get_ifaces(self) -> core_pb2.GetInterfacesResponse: """ Retrieves a list of interfaces available on the host machine that are not a part of a CORE session. - :return: list of interfaces + :return: get interfaces response """ request = core_pb2.GetInterfacesRequest() - response = self.stub.GetInterfaces(request) - return list(response.ifaces) + return self.stub.GetInterfaces(request) + + def get_config_services(self) -> GetConfigServicesResponse: + """ + Retrieve all known config services. + + :return: get config services response + """ + request = GetConfigServicesRequest() + return self.stub.GetConfigServices(request) def get_config_service_defaults( - self, session_id: int, node_id: int, name: str - ) -> wrappers.ConfigServiceDefaults: + self, name: str + ) -> GetConfigServiceDefaultsResponse: """ Retrieves config service default values. - :param session_id: session id to get node from - :param node_id: node id to get service data from :param name: name of service to get defaults for - :return: config service defaults + :return: get config service defaults """ - request = GetConfigServiceDefaultsRequest( - name=name, session_id=session_id, node_id=node_id - ) - response = self.stub.GetConfigServiceDefaults(request) - return wrappers.ConfigServiceDefaults.from_proto(response) + request = GetConfigServiceDefaultsRequest(name=name) + return self.stub.GetConfigServiceDefaults(request) + + def get_node_config_service_configs( + self, session_id: int + ) -> GetNodeConfigServiceConfigsResponse: + """ + Retrieves all node config service configurations for a session. 
+ + :param session_id: session to get config service configurations for + :return: get node config service configs response + :raises grpc.RpcError: when session doesn't exist + """ + request = GetNodeConfigServiceConfigsRequest(session_id=session_id) + return self.stub.GetNodeConfigServiceConfigs(request) def get_node_config_service( self, session_id: int, node_id: int, name: str - ) -> dict[str, str]: + ) -> GetNodeConfigServiceResponse: """ Retrieves information for a specific config service on a node. :param session_id: session node belongs to :param node_id: id of node to get service information from :param name: name of service - :return: config dict of names to values + :return: get node config service response :raises grpc.RpcError: when session or node doesn't exist """ request = GetNodeConfigServiceRequest( session_id=session_id, node_id=node_id, name=name ) - response = self.stub.GetNodeConfigService(request) - return dict(response.config) + return self.stub.GetNodeConfigService(request) - def get_config_service_rendered( - self, session_id: int, node_id: int, name: str - ) -> dict[str, str]: + def get_node_config_services( + self, session_id: int, node_id: int + ) -> GetNodeConfigServicesResponse: """ - Retrieve the rendered config service files for a node. + Retrieves the config services currently assigned to a node. - :param session_id: id of session - :param node_id: id of node + :param session_id: session node belongs to + :param node_id: id of node to get config services for + :return: get node config services response + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetNodeConfigServicesRequest(session_id=session_id, node_id=node_id) + return self.stub.GetNodeConfigServices(request) + + def set_node_config_service( + self, session_id: int, node_id: int, name: str, config: Dict[str, str] + ) -> SetNodeConfigServiceResponse: + """ + Assigns a config service to a node with the provided configuration. + + :param session_id: session node belongs to + :param node_id: id of node to assign config service to :param name: name of service - :return: dict mapping names of files to rendered data + :param config: service configuration + :return: set node config service response + :raises grpc.RpcError: when session or node doesn't exist """ - request = GetConfigServiceRenderedRequest( - session_id=session_id, node_id=node_id, name=name + request = SetNodeConfigServiceRequest( + session_id=session_id, node_id=node_id, name=name, config=config ) - response = self.stub.GetConfigServiceRendered(request) - return dict(response.rendered) + return self.stub.SetNodeConfigService(request) - def get_emane_event_channel( - self, session_id: int, nem_id: int - ) -> wrappers.EmaneEventChannel: + def get_emane_event_channel(self, session_id: int) -> GetEmaneEventChannelResponse: """ Retrieves the current emane event channel being used for a session. 
:param session_id: session to get emane event channel for - :param nem_id: nem id for the desired event channel - :return: emane event channel + :return: emane event channel response :raises grpc.RpcError: when session doesn't exist """ - request = GetEmaneEventChannelRequest(session_id=session_id, nem_id=nem_id) - response = self.stub.GetEmaneEventChannel(request) - return wrappers.EmaneEventChannel.from_proto(response) + request = GetEmaneEventChannelRequest(session_id=session_id) + return self.stub.GetEmaneEventChannel(request) - def execute_script(self, script: str, args: str) -> Optional[int]: + def execute_script(self, script: str) -> ExecuteScriptResponse: """ Executes a python script given context of the current CoreEmu object. :param script: script to execute - :param args: arguments to provide to script - :return: create session id for script executed + :return: execute script response """ - request = ExecuteScriptRequest(script=script, args=args) - response = self.stub.ExecuteScript(request) - return response.session_id if response.session_id else None + request = ExecuteScriptRequest(script=script) + return self.stub.ExecuteScript(request) def wlan_link( self, session_id: int, wlan_id: int, node1_id: int, node2_id: int, linked: bool - ) -> bool: + ) -> WlanLinkResponse: """ Links/unlinks nodes on the same WLAN. @@ -1042,7 +1318,7 @@ class CoreGrpcClient: :param node1_id: first node of pair to link/unlink :param node2_id: second node of pair to link/unlin :param linked: True to link, False to unlink - :return: True for success, False otherwise + :return: wlan link response :raises grpc.RpcError: when session or one of the nodes do not exist """ request = WlanLinkRequest( @@ -1052,94 +1328,20 @@ class CoreGrpcClient: node2_id=node2_id, linked=linked, ) - response = self.stub.WlanLink(request) - return response.result + return self.stub.WlanLink(request) - def emane_pathlosses(self, streamer: EmanePathlossesStreamer) -> None: + def emane_pathlosses( + self, pathloss_iterator: Iterable[EmanePathlossesRequest] + ) -> EmanePathlossesResponse: """ Stream EMANE pathloss events. - :param streamer: emane pathlosses streamer - :return: nothing + :param pathloss_iterator: iterator for sending emane pathloss events + :return: emane pathloss response :raises grpc.RpcError: when a pathloss event session or one of the nodes do not exist """ - self.stub.EmanePathlosses(streamer.iter()) - - def linked( - self, - session_id: int, - node1_id: int, - node2_id: int, - iface1_id: int, - iface2_id: int, - linked: bool, - ) -> None: - """ - Link or unlink an existing core wired link. 
- - :param session_id: session containing the link - :param node1_id: first node in link - :param node2_id: second node in link - :param iface1_id: node1 interface - :param iface2_id: node2 interface - :param linked: True to connect link, False to disconnect - :return: nothing - """ - request = LinkedRequest( - session_id=session_id, - node1_id=node1_id, - node2_id=node2_id, - iface1_id=iface1_id, - iface2_id=iface2_id, - linked=linked, - ) - self.stub.Linked(request) - - def wireless_linked( - self, - session_id: int, - wireless_id: int, - node1_id: int, - node2_id: int, - linked: bool, - ) -> None: - request = WirelessLinkedRequest( - session_id=session_id, - wireless_id=wireless_id, - node1_id=node1_id, - node2_id=node2_id, - linked=linked, - ) - self.stub.WirelessLinked(request) - - def wireless_config( - self, - session_id: int, - wireless_id: int, - node1_id: int, - node2_id: int, - options1: LinkOptions, - options2: LinkOptions = None, - ) -> None: - if options2 is None: - options2 = options1 - request = WirelessConfigRequest( - session_id=session_id, - wireless_id=wireless_id, - node1_id=node1_id, - node2_id=node2_id, - options1=options1.to_proto(), - options2=options2.to_proto(), - ) - self.stub.WirelessConfig(request) - - def get_wireless_config( - self, session_id: int, node_id: int - ) -> dict[str, wrappers.ConfigOption]: - request = GetWirelessConfigRequest(session_id=session_id, node_id=node_id) - response = self.stub.GetWirelessConfig(request) - return wrappers.ConfigOption.from_dict(response.config) + return self.stub.EmanePathlosses(pathloss_iterator) def connect(self) -> None: """ @@ -1163,7 +1365,7 @@ class CoreGrpcClient: self.channel = None @contextmanager - def context_connect(self) -> Generator[None, None, None]: + def context_connect(self) -> Generator: """ Makes a context manager based connection to the server, will close after context ends. diff --git a/daemon/core/api/grpc/clientw.py b/daemon/core/api/grpc/clientw.py new file mode 100644 index 00000000..36ec69ad --- /dev/null +++ b/daemon/core/api/grpc/clientw.py @@ -0,0 +1,1500 @@ +""" +gRpc client for interfacing with CORE. 
+""" + +import logging +import threading +from contextlib import contextmanager +from queue import Queue +from typing import Any, Callable, Dict, Generator, Iterable, List, Optional, Tuple + +import grpc + +from core.api.grpc import ( + configservices_pb2, + core_pb2, + core_pb2_grpc, + emane_pb2, + mobility_pb2, + services_pb2, + wlan_pb2, + wrappers, +) +from core.api.grpc.configservices_pb2 import ( + GetConfigServiceDefaultsRequest, + GetConfigServicesRequest, + GetNodeConfigServiceConfigsRequest, + GetNodeConfigServiceRequest, + GetNodeConfigServicesRequest, + SetNodeConfigServiceRequest, +) +from core.api.grpc.core_pb2 import ExecuteScriptRequest +from core.api.grpc.emane_pb2 import ( + EmaneLinkRequest, + GetEmaneConfigRequest, + GetEmaneEventChannelRequest, + GetEmaneModelConfigRequest, + GetEmaneModelConfigsRequest, + GetEmaneModelsRequest, + SetEmaneConfigRequest, + SetEmaneModelConfigRequest, +) +from core.api.grpc.mobility_pb2 import ( + GetMobilityConfigRequest, + GetMobilityConfigsRequest, + MobilityActionRequest, + MobilityConfig, + SetMobilityConfigRequest, +) +from core.api.grpc.services_pb2 import ( + GetNodeServiceConfigsRequest, + GetNodeServiceFileRequest, + GetNodeServiceRequest, + GetServiceDefaultsRequest, + GetServicesRequest, + ServiceActionRequest, + ServiceDefaults, + ServiceFileConfig, + SetNodeServiceFileRequest, + SetNodeServiceRequest, + SetServiceDefaultsRequest, +) +from core.api.grpc.wlan_pb2 import ( + GetWlanConfigRequest, + GetWlanConfigsRequest, + SetWlanConfigRequest, + WlanConfig, + WlanLinkRequest, +) +from core.emulator.data import IpPrefixes + + +class MoveNodesStreamer: + def __init__(self, session_id: int = None, source: str = None) -> None: + self.session_id = session_id + self.source = source + self.queue: Queue = Queue() + + def send_position(self, node_id: int, x: float, y: float) -> None: + position = wrappers.Position(x=x, y=y) + request = wrappers.MoveNodesRequest( + session_id=self.session_id, + node_id=node_id, + source=self.source, + position=position, + ) + self.send(request) + + def send_geo(self, node_id: int, lon: float, lat: float, alt: float) -> None: + geo = wrappers.Geo(lon=lon, lat=lat, alt=alt) + request = wrappers.MoveNodesRequest( + session_id=self.session_id, node_id=node_id, source=self.source, geo=geo + ) + self.send(request) + + def send(self, request: wrappers.MoveNodesRequest) -> None: + self.queue.put(request) + + def stop(self) -> None: + self.queue.put(None) + + def next(self) -> Optional[core_pb2.MoveNodesRequest]: + request: Optional[wrappers.MoveNodesRequest] = self.queue.get() + if request: + return request.to_proto() + else: + return request + + def iter(self) -> Iterable: + return iter(self.next, None) + + +class EmanePathlossesStreamer: + def __init__(self) -> None: + self.queue: Queue = Queue() + + def send(self, request: Optional[wrappers.EmanePathlossesRequest]) -> None: + self.queue.put(request) + + def next(self) -> Optional[emane_pb2.EmanePathlossesRequest]: + request: Optional[wrappers.EmanePathlossesRequest] = self.queue.get() + if request: + return request.to_proto() + else: + return request + + def iter(self): + return iter(self.next, None) + + +class InterfaceHelper: + """ + Convenience class to help generate IP4 and IP6 addresses for gRPC clients. + """ + + def __init__(self, ip4_prefix: str = None, ip6_prefix: str = None) -> None: + """ + Creates an InterfaceHelper object. 
+ + :param ip4_prefix: ip4 prefix to use for generation + :param ip6_prefix: ip6 prefix to use for generation + :raises ValueError: when both ip4 and ip6 prefixes have not been provided + """ + self.prefixes: IpPrefixes = IpPrefixes(ip4_prefix, ip6_prefix) + + def create_iface( + self, node_id: int, iface_id: int, name: str = None, mac: str = None + ) -> wrappers.Interface: + """ + Create an interface protobuf object. + + :param node_id: node id to create interface for + :param iface_id: interface id + :param name: name of interface + :param mac: mac address for interface + :return: interface protobuf + """ + iface_data = self.prefixes.gen_iface(node_id, name, mac) + return wrappers.Interface( + id=iface_id, + name=iface_data.name, + ip4=iface_data.ip4, + ip4_mask=iface_data.ip4_mask, + ip6=iface_data.ip6, + ip6_mask=iface_data.ip6_mask, + mac=iface_data.mac, + ) + + +def throughput_listener( + stream: Any, handler: Callable[[wrappers.ThroughputsEvent], None] +) -> None: + """ + Listen for throughput events and provide them to the handler. + + :param stream: grpc stream that will provide events + :param handler: function that handles an event + :return: nothing + """ + try: + for event_proto in stream: + event = wrappers.ThroughputsEvent.from_proto(event_proto) + handler(event) + except grpc.RpcError as e: + if e.code() == grpc.StatusCode.CANCELLED: + logging.debug("throughput stream closed") + else: + logging.exception("throughput stream error") + + +def cpu_listener( + stream: Any, handler: Callable[[wrappers.CpuUsageEvent], None] +) -> None: + """ + Listen for cpu events and provide them to the handler. + + :param stream: grpc stream that will provide events + :param handler: function that handles an event + :return: nothing + """ + try: + for event_proto in stream: + event = wrappers.CpuUsageEvent.from_proto(event_proto) + handler(event) + except grpc.RpcError as e: + if e.code() == grpc.StatusCode.CANCELLED: + logging.debug("cpu stream closed") + else: + logging.exception("cpu stream error") + + +def event_listener(stream: Any, handler: Callable[[wrappers.Event], None]) -> None: + """ + Listen for session events and provide them to the handler. + + :param stream: grpc stream that will provide events + :param handler: function that handles an event + :return: nothing + """ + try: + for event_proto in stream: + event = wrappers.Event.from_proto(event_proto) + handler(event) + except grpc.RpcError as e: + if e.code() == grpc.StatusCode.CANCELLED: + logging.debug("session stream closed") + else: + logging.exception("session stream error") + + +class CoreGrpcClient: + """ + Provides convenience methods for interfacing with the CORE grpc server. + """ + + def __init__(self, address: str = "localhost:50051", proxy: bool = False) -> None: + """ + Creates a CoreGrpcClient instance. + + :param address: grpc server address to connect to + """ + self.address: str = address + self.stub: Optional[core_pb2_grpc.CoreApiStub] = None + self.channel: Optional[grpc.Channel] = None + self.proxy: bool = proxy + + def start_session( + self, session: wrappers.Session, asymmetric_links: List[wrappers.Link] = None + ) -> Tuple[bool, List[str]]: + """ + Start a session. 
+ + :param session: session to start + :param asymmetric_links: link configuration for asymmetric links + :return: tuple of result and exception strings + """ + nodes = [x.to_proto() for x in session.nodes.values()] + links = [x.to_proto() for x in session.links] + if asymmetric_links: + asymmetric_links = [x.to_proto() for x in asymmetric_links] + hooks = [x.to_proto() for x in session.hooks.values()] + emane_config = {k: v.value for k, v in session.emane_config.items()} + emane_model_configs = [] + mobility_configs = [] + wlan_configs = [] + service_configs = [] + service_file_configs = [] + config_service_configs = [] + for node in session.nodes.values(): + for key, config in node.emane_model_configs.items(): + model, iface_id = key + config = wrappers.ConfigOption.to_dict(config) + if iface_id is None: + iface_id = -1 + emane_model_config = emane_pb2.EmaneModelConfig( + node_id=node.id, iface_id=iface_id, model=model, config=config + ) + emane_model_configs.append(emane_model_config) + if node.wlan_config: + config = wrappers.ConfigOption.to_dict(node.wlan_config) + wlan_config = wlan_pb2.WlanConfig(node_id=node.id, config=config) + wlan_configs.append(wlan_config) + if node.mobility_config: + config = wrappers.ConfigOption.to_dict(node.mobility_config) + mobility_config = mobility_pb2.MobilityConfig( + node_id=node.id, config=config + ) + mobility_configs.append(mobility_config) + for name, config in node.service_configs.items(): + service_config = services_pb2.ServiceConfig( + node_id=node.id, + service=name, + directories=config.dirs, + files=config.configs, + startup=config.startup, + validate=config.validate, + shutdown=config.shutdown, + ) + service_configs.append(service_config) + for service, file_configs in node.service_file_configs.items(): + for file, data in file_configs.items(): + service_file_config = services_pb2.ServiceFileConfig( + node_id=node.id, service=service, file=file, data=data + ) + service_file_configs.append(service_file_config) + for name, service_config in node.config_service_configs.items(): + config_service_config = configservices_pb2.ConfigServiceConfig( + node_id=node.id, + name=name, + templates=service_config.templates, + config=service_config.config, + ) + config_service_configs.append(config_service_config) + request = core_pb2.StartSessionRequest( + session_id=session.id, + nodes=nodes, + links=links, + location=session.location.to_proto(), + hooks=hooks, + emane_config=emane_config, + emane_model_configs=emane_model_configs, + wlan_configs=wlan_configs, + mobility_configs=mobility_configs, + service_configs=service_configs, + service_file_configs=service_file_configs, + asymmetric_links=asymmetric_links, + config_service_configs=config_service_configs, + ) + response = self.stub.StartSession(request) + return response.result, list(response.exceptions) + + def stop_session(self, session_id: int) -> bool: + """ + Stop a running session. + + :param session_id: id of session + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.StopSessionRequest(session_id=session_id) + response = self.stub.StopSession(request) + return response.result + + def create_session(self, session_id: int = None) -> int: + """ + Create a session. 
+ + :param session_id: id for session, default is None and one will be created + for you + :return: session id + """ + request = core_pb2.CreateSessionRequest(session_id=session_id) + response = self.stub.CreateSession(request) + return response.session_id + + def delete_session(self, session_id: int) -> bool: + """ + Delete a session. + + :param session_id: id of session + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.DeleteSessionRequest(session_id=session_id) + response = self.stub.DeleteSession(request) + return response.result + + def get_sessions(self) -> List[wrappers.SessionSummary]: + """ + Retrieves all currently known sessions. + + :return: response with a list of currently known session, their state and + number of nodes + """ + response = self.stub.GetSessions(core_pb2.GetSessionsRequest()) + sessions = [] + for session_proto in response.sessions: + session = wrappers.SessionSummary.from_proto(session_proto) + sessions.append(session) + return sessions + + def check_session(self, session_id: int) -> bool: + """ + Check if a session exists. + + :param session_id: id of session to check for + :return: True if exists, False otherwise + """ + request = core_pb2.CheckSessionRequest(session_id=session_id) + response = self.stub.CheckSession(request) + return response.result + + def get_session(self, session_id: int) -> wrappers.Session: + """ + Retrieve a session. + + :param session_id: id of session + :return: session + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionRequest(session_id=session_id) + response = self.stub.GetSession(request) + return wrappers.Session.from_proto(response.session) + + def get_session_options(self, session_id: int) -> Dict[str, wrappers.ConfigOption]: + """ + Retrieve session options as a dict with id mapping. + + :param session_id: id of session + :return: session configuration options + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionOptionsRequest(session_id=session_id) + response = self.stub.GetSessionOptions(request) + return wrappers.ConfigOption.from_dict(response.config) + + def set_session_options(self, session_id: int, config: Dict[str, str]) -> bool: + """ + Set options for a session. + + :param session_id: id of session + :param config: configuration values to set + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionOptionsRequest( + session_id=session_id, config=config + ) + response = self.stub.SetSessionOptions(request) + return response.result + + def get_session_metadata(self, session_id: int) -> Dict[str, str]: + """ + Retrieve session metadata as a dict with id mapping. + + :param session_id: id of session + :return: response with metadata dict + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionMetadataRequest(session_id=session_id) + response = self.stub.GetSessionMetadata(request) + return dict(response.config) + + def set_session_metadata(self, session_id: int, config: Dict[str, str]) -> bool: + """ + Set metadata for a session. 
+ + :param session_id: id of session + :param config: configuration values to set + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionMetadataRequest( + session_id=session_id, config=config + ) + response = self.stub.SetSessionMetadata(request) + return response.result + + def get_session_location(self, session_id: int) -> wrappers.SessionLocation: + """ + Get session location. + + :param session_id: id of session + :return: response with session position reference and scale + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetSessionLocationRequest(session_id=session_id) + response = self.stub.GetSessionLocation(request) + return wrappers.SessionLocation.from_proto(response.location) + + def set_session_location( + self, session_id: int, location: wrappers.SessionLocation + ) -> bool: + """ + Set session location. + + :param session_id: id of session + :param location: session location + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionLocationRequest( + session_id=session_id, location=location.to_proto() + ) + response = self.stub.SetSessionLocation(request) + return response.result + + def set_session_state(self, session_id: int, state: wrappers.SessionState) -> bool: + """ + Set session state. + + :param session_id: id of session + :param state: session state to transition to + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionStateRequest( + session_id=session_id, state=state.value + ) + response = self.stub.SetSessionState(request) + return response.result + + def set_session_user(self, session_id: int, user: str) -> bool: + """ + Set session user, used for helping to find files without full paths. + + :param session_id: id of session + :param user: user to set for session + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SetSessionUserRequest(session_id=session_id, user=user) + response = self.stub.SetSessionUser(request) + return response.result + + def add_session_server(self, session_id: int, name: str, host: str) -> bool: + """ + Add distributed session server. + + :param session_id: id of session + :param name: name of server to add + :param host: host address to connect to + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.AddSessionServerRequest( + session_id=session_id, name=name, host=host + ) + response = self.stub.AddSessionServer(request) + return response.result + + def alert( + self, + session_id: int, + level: wrappers.ExceptionLevel, + source: str, + text: str, + node_id: int = None, + ) -> bool: + """ + Initiate an alert to be broadcast out to all listeners. 
+ + :param session_id: id of session + :param level: alert level + :param source: source of alert + :param text: alert text + :param node_id: node associated with alert + :return: True for success, False otherwise + """ + request = core_pb2.SessionAlertRequest( + session_id=session_id, + level=level.value, + source=source, + text=text, + node_id=node_id, + ) + response = self.stub.SessionAlert(request) + return response.result + + def events( + self, + session_id: int, + handler: Callable[[wrappers.Event], None], + events: List[wrappers.EventType] = None, + ) -> grpc.Future: + """ + Listen for session events. + + :param session_id: id of session + :param handler: handler for received events + :param events: events to listen to, defaults to all + :return: stream processing events, can be used to cancel stream + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.EventsRequest(session_id=session_id, events=events) + stream = self.stub.Events(request) + thread = threading.Thread( + target=event_listener, args=(stream, handler), daemon=True + ) + thread.start() + return stream + + def throughputs( + self, session_id: int, handler: Callable[[wrappers.ThroughputsEvent], None] + ) -> grpc.Future: + """ + Listen for throughput events with information for interfaces and bridges. + + :param session_id: session id + :param handler: handler for every event + :return: stream processing events, can be used to cancel stream + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.ThroughputsRequest(session_id=session_id) + stream = self.stub.Throughputs(request) + thread = threading.Thread( + target=throughput_listener, args=(stream, handler), daemon=True + ) + thread.start() + return stream + + def cpu_usage( + self, delay: int, handler: Callable[[wrappers.CpuUsageEvent], None] + ) -> grpc.Future: + """ + Listen for cpu usage events with the given repeat delay. + + :param delay: delay between receiving events + :param handler: handler for every event + :return: stream processing events, can be used to cancel stream + """ + request = core_pb2.CpuUsageRequest(delay=delay) + stream = self.stub.CpuUsage(request) + thread = threading.Thread( + target=cpu_listener, args=(stream, handler), daemon=True + ) + thread.start() + return stream + + def add_node(self, session_id: int, node: wrappers.Node, source: str = None) -> int: + """ + Add node to session. + + :param session_id: session id + :param node: node to add + :param source: source application + :return: id of added node + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.AddNodeRequest( + session_id=session_id, node=node.to_proto(), source=source + ) + response = self.stub.AddNode(request) + return response.node_id + + def get_node( + self, session_id: int, node_id: int + ) -> Tuple[wrappers.Node, List[wrappers.Interface]]: + """ + Get node details. 
+ + :param session_id: session id + :param node_id: node id + :return: tuple of node and its interfaces + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.GetNodeRequest(session_id=session_id, node_id=node_id) + response = self.stub.GetNode(request) + node = wrappers.Node.from_proto(response.node) + ifaces = [] + for iface_proto in response.ifaces: + iface = wrappers.Interface.from_proto(iface_proto) + ifaces.append(iface) + return node, ifaces + + def edit_node( + self, + session_id: int, + node_id: int, + position: wrappers.Position = None, + icon: str = None, + geo: wrappers.Geo = None, + source: str = None, + ) -> bool: + """ + Edit a node's icon and/or location, can only use position(x,y) or + geo(lon, lat, alt), not both. + + :param session_id: session id + :param node_id: node id + :param position: x,y location for node + :param icon: path to icon for gui to use for node + :param geo: lon,lat,alt location for node + :param source: application source + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.EditNodeRequest( + session_id=session_id, + node_id=node_id, + position=position.to_proto(), + icon=icon, + source=source, + geo=geo.to_proto(), + ) + response = self.stub.EditNode(request) + return response.result + + def move_nodes(self, streamer: MoveNodesStreamer) -> None: + """ + Stream node movements using the provided iterator. + + :param streamer: move nodes streamer + :return: nothing + :raises grpc.RpcError: when session or nodes do not exist + """ + self.stub.MoveNodes(streamer.iter()) + + def delete_node(self, session_id: int, node_id: int, source: str = None) -> bool: + """ + Delete node from session. + + :param session_id: session id + :param node_id: node id + :param source: application source + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.DeleteNodeRequest( + session_id=session_id, node_id=node_id, source=source + ) + response = self.stub.DeleteNode(request) + return response.result + + def node_command( + self, + session_id: int, + node_id: int, + command: str, + wait: bool = True, + shell: bool = False, + ) -> Tuple[int, str]: + """ + Send command to a node and get the output. + + :param session_id: session id + :param node_id: node id + :param command: command to run on node + :param wait: wait for command to complete + :param shell: send shell command + :return: returns tuple of return code and output + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.NodeCommandRequest( + session_id=session_id, + node_id=node_id, + command=command, + wait=wait, + shell=shell, + ) + response = self.stub.NodeCommand(request) + return response.return_code, response.output + + def get_node_terminal(self, session_id: int, node_id: int) -> str: + """ + Retrieve terminal command string for launching a local terminal. + + :param session_id: session id + :param node_id: node id + :return: node terminal + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.GetNodeTerminalRequest( + session_id=session_id, node_id=node_id + ) + response = self.stub.GetNodeTerminal(request) + return response.terminal + + def get_node_links(self, session_id: int, node_id: int) -> List[wrappers.Link]: + """ + Get current links for a node. 
+ + :param session_id: session id + :param node_id: node id + :return: list of links + :raises grpc.RpcError: when session or node doesn't exist + """ + request = core_pb2.GetNodeLinksRequest(session_id=session_id, node_id=node_id) + response = self.stub.GetNodeLinks(request) + links = [] + for link_proto in response.links: + link = wrappers.Link.from_proto(link_proto) + links.append(link) + return links + + def add_link( + self, session_id: int, link: wrappers.Link, source: str = None + ) -> Tuple[bool, wrappers.Interface, wrappers.Interface]: + """ + Add a link between nodes. + + :param session_id: session id + :param link: link to add + :param source: application source + :return: tuple of result and finalized interface values + :raises grpc.RpcError: when session or one of the nodes don't exist + """ + request = core_pb2.AddLinkRequest( + session_id=session_id, link=link.to_proto(), source=source + ) + response = self.stub.AddLink(request) + iface1 = wrappers.Interface.from_proto(response.iface1) + iface2 = wrappers.Interface.from_proto(response.iface2) + return response.result, iface1, iface2 + + def edit_link( + self, session_id: int, link: wrappers.Link, source: str = None + ) -> bool: + """ + Edit a link between nodes. + + :param session_id: session id + :param link: link to edit + :param source: application source + :return: response with result of success or failure + :raises grpc.RpcError: when session or one of the nodes don't exist + """ + iface1_id = link.iface1.id if link.iface1 else None + iface2_id = link.iface2.id if link.iface2 else None + request = core_pb2.EditLinkRequest( + session_id=session_id, + node1_id=link.node1_id, + node2_id=link.node2_id, + options=link.options.to_proto(), + iface1_id=iface1_id, + iface2_id=iface2_id, + source=source, + ) + response = self.stub.EditLink(request) + return response.result + + def delete_link( + self, session_id: int, link: wrappers.Link, source: str = None + ) -> bool: + """ + Delete a link between nodes. + + :param session_id: session id + :param link: link to delete + :param source: application source + :return: response with result of success or failure + :raises grpc.RpcError: when session doesn't exist + """ + iface1_id = link.iface1.id if link.iface1 else None + iface2_id = link.iface2.id if link.iface2 else None + request = core_pb2.DeleteLinkRequest( + session_id=session_id, + node1_id=link.node1_id, + node2_id=link.node2_id, + iface1_id=iface1_id, + iface2_id=iface2_id, + source=source, + ) + response = self.stub.DeleteLink(request) + return response.result + + def get_hooks(self, session_id: int) -> List[wrappers.Hook]: + """ + Get all hook scripts. + + :param session_id: session id + :return: list of hooks + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.GetHooksRequest(session_id=session_id) + response = self.stub.GetHooks(request) + hooks = [] + for hook_proto in response.hooks: + hook = wrappers.Hook.from_proto(hook_proto) + hooks.append(hook) + return hooks + + def add_hook( + self, + session_id: int, + state: wrappers.SessionState, + file_name: str, + file_data: str, + ) -> bool: + """ + Add hook scripts. 
+ + :param session_id: session id + :param state: state to trigger hook + :param file_name: name of file for hook script + :param file_data: hook script contents + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + hook = core_pb2.Hook(state=state.value, file=file_name, data=file_data) + request = core_pb2.AddHookRequest(session_id=session_id, hook=hook) + response = self.stub.AddHook(request) + return response.result + + def get_mobility_configs( + self, session_id: int + ) -> Dict[int, Dict[str, wrappers.ConfigOption]]: + """ + Get all mobility configurations. + + :param session_id: session id + :return: dict of node id to mobility configuration dict + :raises grpc.RpcError: when session doesn't exist + """ + request = GetMobilityConfigsRequest(session_id=session_id) + response = self.stub.GetMobilityConfigs(request) + configs = {} + for node_id, mapped_config in response.configs.items(): + configs[node_id] = wrappers.ConfigOption.from_dict(mapped_config.config) + return configs + + def get_mobility_config( + self, session_id: int, node_id: int + ) -> Dict[str, wrappers.ConfigOption]: + """ + Get mobility configuration for a node. + + :param session_id: session id + :param node_id: node id + :return: dict of config name to options + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetMobilityConfigRequest(session_id=session_id, node_id=node_id) + response = self.stub.GetMobilityConfig(request) + return wrappers.ConfigOption.from_dict(response.config) + + def set_mobility_config( + self, session_id: int, node_id: int, config: Dict[str, str] + ) -> bool: + """ + Set mobility configuration for a node. + + :param session_id: session id + :param node_id: node id + :param config: mobility configuration + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + mobility_config = MobilityConfig(node_id=node_id, config=config) + request = SetMobilityConfigRequest( + session_id=session_id, mobility_config=mobility_config + ) + response = self.stub.SetMobilityConfig(request) + return response.result + + def mobility_action( + self, session_id: int, node_id: int, action: wrappers.MobilityAction + ) -> bool: + """ + Send a mobility action for a node. + + :param session_id: session id + :param node_id: node id + :param action: action to take + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + request = MobilityActionRequest( + session_id=session_id, node_id=node_id, action=action.value + ) + response = self.stub.MobilityAction(request) + return response.result + + def get_services(self) -> List[wrappers.Service]: + """ + Get all currently loaded services. + + :return: list of services, name and groups only + """ + request = GetServicesRequest() + response = self.stub.GetServices(request) + services = [] + for service_proto in response.services: + service = wrappers.Service.from_proto(service_proto) + services.append(service) + return services + + def get_service_defaults(self, session_id: int) -> List[wrappers.ServiceDefault]: + """ + Get default services for different default node models. 
+ + :param session_id: session id + :return: list of service defaults + :raises grpc.RpcError: when session doesn't exist + """ + request = GetServiceDefaultsRequest(session_id=session_id) + response = self.stub.GetServiceDefaults(request) + defaults = [] + for default_proto in response.defaults: + default = wrappers.ServiceDefault.from_proto(default_proto) + defaults.append(default) + return defaults + + def set_service_defaults( + self, session_id: int, service_defaults: Dict[str, List[str]] + ) -> bool: + """ + Set default services for node models. + + :param session_id: session id + :param service_defaults: node models to lists of services + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + defaults = [] + for node_type in service_defaults: + services = service_defaults[node_type] + default = ServiceDefaults(node_type=node_type, services=services) + defaults.append(default) + request = SetServiceDefaultsRequest(session_id=session_id, defaults=defaults) + response = self.stub.SetServiceDefaults(request) + return response.result + + def get_node_service_configs( + self, session_id: int + ) -> List[wrappers.NodeServiceData]: + """ + Get service data for a node. + + :param session_id: session id + :return: list of node service data + :raises grpc.RpcError: when session doesn't exist + """ + request = GetNodeServiceConfigsRequest(session_id=session_id) + response = self.stub.GetNodeServiceConfigs(request) + node_services = [] + for service_proto in response.configs: + node_service = wrappers.NodeServiceData.from_proto(service_proto) + node_services.append(node_service) + return node_services + + def get_node_service( + self, session_id: int, node_id: int, service: str + ) -> wrappers.NodeServiceData: + """ + Get service data for a node. + + :param session_id: session id + :param node_id: node id + :param service: service name + :return: node service data + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetNodeServiceRequest( + session_id=session_id, node_id=node_id, service=service + ) + response = self.stub.GetNodeService(request) + return wrappers.NodeServiceData.from_proto(response.service) + + def get_node_service_file( + self, session_id: int, node_id: int, service: str, file_name: str + ) -> str: + """ + Get a service file for a node. + + :param session_id: session id + :param node_id: node id + :param service: service name + :param file_name: file name to get data for + :return: file data + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetNodeServiceFileRequest( + session_id=session_id, node_id=node_id, service=service, file=file_name + ) + response = self.stub.GetNodeServiceFile(request) + return response.data + + def set_node_service( + self, session_id: int, service_config: wrappers.ServiceConfig + ) -> bool: + """ + Set service data for a node. + + :param session_id: session id + :param service_config: service configuration for a node + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + request = SetNodeServiceRequest( + session_id=session_id, config=service_config.to_proto() + ) + response = self.stub.SetNodeService(request) + return response.result + + def set_node_service_file( + self, session_id: int, node_id: int, service: str, file_name: str, data: str + ) -> bool: + """ + Set a service file for a node. 
+ + :param session_id: session id + :param node_id: node id + :param service: service name + :param file_name: file name to save + :param data: data to save for file + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + config = ServiceFileConfig( + node_id=node_id, service=service, file=file_name, data=data + ) + request = SetNodeServiceFileRequest(session_id=session_id, config=config) + response = self.stub.SetNodeServiceFile(request) + return response.result + + def service_action( + self, + session_id: int, + node_id: int, + service: str, + action: wrappers.ServiceAction, + ) -> bool: + """ + Send an action to a service for a node. + + :param session_id: session id + :param node_id: node id + :param service: service name + :param action: action for service (start, stop, restart, + validate) + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + request = ServiceActionRequest( + session_id=session_id, node_id=node_id, service=service, action=action.value + ) + response = self.stub.ServiceAction(request) + return response.result + + def get_wlan_configs( + self, session_id: int + ) -> Dict[int, Dict[str, wrappers.ConfigOption]]: + """ + Get all wlan configurations. + + :param session_id: session id + :return: dict of node ids to dict of names to options + :raises grpc.RpcError: when session doesn't exist + """ + request = GetWlanConfigsRequest(session_id=session_id) + response = self.stub.GetWlanConfigs(request) + configs = {} + for node_id, mapped_config in response.configs.items(): + configs[node_id] = wrappers.ConfigOption.from_dict(mapped_config.config) + return configs + + def get_wlan_config( + self, session_id: int, node_id: int + ) -> Dict[str, wrappers.ConfigOption]: + """ + Get wlan configuration for a node. + + :param session_id: session id + :param node_id: node id + :return: dict of names to options + :raises grpc.RpcError: when session doesn't exist + """ + request = GetWlanConfigRequest(session_id=session_id, node_id=node_id) + response = self.stub.GetWlanConfig(request) + return wrappers.ConfigOption.from_dict(response.config) + + def set_wlan_config( + self, session_id: int, node_id: int, config: Dict[str, str] + ) -> bool: + """ + Set wlan configuration for a node. + + :param session_id: session id + :param node_id: node id + :param config: wlan configuration + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + wlan_config = WlanConfig(node_id=node_id, config=config) + request = SetWlanConfigRequest(session_id=session_id, wlan_config=wlan_config) + response = self.stub.SetWlanConfig(request) + return response.result + + def get_emane_config(self, session_id: int) -> Dict[str, wrappers.ConfigOption]: + """ + Get session emane configuration. + + :param session_id: session id + :return: response with a list of configuration groups + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneConfigRequest(session_id=session_id) + response = self.stub.GetEmaneConfig(request) + return wrappers.ConfigOption.from_dict(response.config) + + def set_emane_config(self, session_id: int, config: Dict[str, str]) -> bool: + """ + Set session emane configuration. 
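For the wlan helpers above, a minimal sketch; the session and wlan node ids are placeholders and the option keys follow BasicRangeModel, so treat them as assumptions:

from core.api.grpc.client import CoreGrpcClient

client = CoreGrpcClient()
client.connect()
session_id, wlan_id = 1, 3  # placeholder ids
# current wlan options come back as a dict of option name -> ConfigOption
current = client.get_wlan_config(session_id, wlan_id)
# values are sent as plain strings (key names assumed from BasicRangeModel)
client.set_wlan_config(session_id, wlan_id, {"range": "275", "bandwidth": "54000000", "delay": "5000"})
client.close()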
+ + :param session_id: session id + :param config: emane configuration + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = SetEmaneConfigRequest(session_id=session_id, config=config) + response = self.stub.SetEmaneConfig(request) + return response.result + + def get_emane_models(self, session_id: int) -> List[str]: + """ + Get session emane models. + + :param session_id: session id + :return: list of emane models + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneModelsRequest(session_id=session_id) + response = self.stub.GetEmaneModels(request) + return list(response.models) + + def get_emane_model_config( + self, session_id: int, node_id: int, model: str, iface_id: int = -1 + ) -> Dict[str, wrappers.ConfigOption]: + """ + Get emane model configuration for a node or a node's interface. + + :param session_id: session id + :param node_id: node id + :param model: emane model name + :param iface_id: node interface id + :return: dict of names to options + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneModelConfigRequest( + session_id=session_id, node_id=node_id, model=model, iface_id=iface_id + ) + response = self.stub.GetEmaneModelConfig(request) + return wrappers.ConfigOption.from_dict(response.config) + + def set_emane_model_config( + self, session_id: int, emane_model_config: wrappers.EmaneModelConfig + ) -> bool: + """ + Set emane model configuration for a node or a node's interface. + + :param session_id: session id + :param emane_model_config: emane model config to set + :return: True for success, False otherwise + :raises grpc.RpcError: when session doesn't exist + """ + request = SetEmaneModelConfigRequest( + session_id=session_id, emane_model_config=emane_model_config.to_proto() + ) + response = self.stub.SetEmaneModelConfig(request) + return response.result + + def get_emane_model_configs( + self, session_id: int + ) -> List[wrappers.EmaneModelConfig]: + """ + Get all EMANE model configurations for a session. + + :param session_id: session to get emane model configs + :return: list of emane model configs + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneModelConfigsRequest(session_id=session_id) + response = self.stub.GetEmaneModelConfigs(request) + configs = [] + for config_proto in response.configs: + config = wrappers.EmaneModelConfig.from_proto(config_proto) + configs.append(config) + return configs + + def save_xml(self, session_id: int, file_path: str) -> None: + """ + Save the current scenario to an XML file. + + :param session_id: session to save xml file for + :param file_path: local path to save scenario XML file to + :return: nothing + :raises grpc.RpcError: when session doesn't exist + """ + request = core_pb2.SaveXmlRequest(session_id=session_id) + response = self.stub.SaveXml(request) + with open(file_path, "w") as xml_file: + xml_file.write(response.data) + + def open_xml(self, file_path: str, start: bool = False) -> Tuple[bool, int]: + """ + Load a local scenario XML file to open as a new session. 
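A sketch for the EMANE calls above; the model name, option key, and ids are illustrative, and the EmaneModelConfig wrapper fields are assumptions mirroring how to_proto() is used here:

from core.api.grpc import wrappers
from core.api.grpc.client import CoreGrpcClient

client = CoreGrpcClient()
client.connect()
session_id, emane_id = 1, 4  # placeholder ids
# list the emane models the daemon knows about
print(client.get_emane_models(session_id))
# configure a model on the emane node itself (iface_id=-1 targets the node, not an interface)
model_config = wrappers.EmaneModelConfig(
    node_id=emane_id, model="emane_ieee80211abg", iface_id=-1, config={"unicastrate": "3"}
)
client.set_emane_model_config(session_id, model_config)
# snapshot the running scenario to a local file
client.save_xml(session_id, "/tmp/scenario.xml")
client.close()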
+ + :param file_path: path of scenario XML file + :param start: True to start the session after opening the XML, False otherwise + :return: tuple of result and session id when successful + """ + with open(file_path, "r") as xml_file: + data = xml_file.read() + request = core_pb2.OpenXmlRequest(data=data, start=start, file=file_path) + response = self.stub.OpenXml(request) + return response.result, response.session_id + + def emane_link(self, session_id: int, nem1: int, nem2: int, linked: bool) -> bool: + """ + Helps broadcast wireless link/unlink between EMANE nodes. + + :param session_id: session to emane link + :param nem1: first nem for emane link + :param nem2: second nem for emane link + :param linked: True to link, False to unlink + :return: True for success, False otherwise + :raises grpc.RpcError: when session or nodes related to nems do not exist + """ + request = EmaneLinkRequest( + session_id=session_id, nem1=nem1, nem2=nem2, linked=linked + ) + response = self.stub.EmaneLink(request) + return response.result + + def get_ifaces(self) -> List[str]: + """ + Retrieves a list of interfaces available on the host machine that are not + a part of a CORE session. + + :return: list of interfaces + """ + request = core_pb2.GetInterfacesRequest() + response = self.stub.GetInterfaces(request) + return list(response.ifaces) + + def get_config_services(self) -> List[wrappers.ConfigService]: + """ + Retrieve all known config services. + + :return: list of config services + """ + request = GetConfigServicesRequest() + response = self.stub.GetConfigServices(request) + services = [] + for service_proto in response.services: + service = wrappers.ConfigService.from_proto(service_proto) + services.append(service) + return services + + def get_config_service_defaults(self, name: str) -> wrappers.ConfigServiceDefaults: + """ + Retrieves config service default values. + + :param name: name of service to get defaults for + :return: config service defaults + """ + request = GetConfigServiceDefaultsRequest(name=name) + response = self.stub.GetConfigServiceDefaults(request) + return wrappers.ConfigServiceDefaults.from_proto(response) + + def get_node_config_service_configs( + self, session_id: int + ) -> List[wrappers.ConfigServiceConfig]: + """ + Retrieves all node config service configurations for a session. + + :param session_id: session to get config service configurations for + :return: list of node config service configs + :raises grpc.RpcError: when session doesn't exist + """ + request = GetNodeConfigServiceConfigsRequest(session_id=session_id) + response = self.stub.GetNodeConfigServiceConfigs(request) + configs = [] + for config_proto in response.configs: + config = wrappers.ConfigServiceConfig.from_proto(config_proto) + configs.append(config) + return configs + + def get_node_config_service( + self, session_id: int, node_id: int, name: str + ) -> Dict[str, str]: + """ + Retrieves information for a specific config service on a node. + + :param session_id: session node belongs to + :param node_id: id of node to get service information from + :param name: name of service + :return: config dict of names to values + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetNodeConfigServiceRequest( + session_id=session_id, node_id=node_id, name=name + ) + response = self.stub.GetNodeConfigService(request) + return dict(response.config) + + def get_node_config_services(self, session_id: int, node_id: int) -> List[str]: + """ + Retrieves the config services currently assigned to a node. 
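Reopening a saved scenario works in the other direction; a small sketch using the open_xml and get_ifaces calls above (file path is illustrative):

from core.api.grpc.client import CoreGrpcClient

client = CoreGrpcClient()
client.connect()
# returns a (result, session_id) tuple; start=True boots the scenario immediately
result, session_id = client.open_xml("/tmp/scenario.xml", start=True)
# host interfaces not already claimed by CORE, useful when wiring up RJ45 nodes
print(client.get_ifaces())
client.close()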
+ + :param session_id: session node belongs to + :param node_id: id of node to get config services for + :return: list of config services + :raises grpc.RpcError: when session or node doesn't exist + """ + request = GetNodeConfigServicesRequest(session_id=session_id, node_id=node_id) + response = self.stub.GetNodeConfigServices(request) + return list(response.services) + + def set_node_config_service( + self, session_id: int, node_id: int, name: str, config: Dict[str, str] + ) -> bool: + """ + Assigns a config service to a node with the provided configuration. + + :param session_id: session node belongs to + :param node_id: id of node to assign config service to + :param name: name of service + :param config: service configuration + :return: True for success, False otherwise + :raises grpc.RpcError: when session or node doesn't exist + """ + request = SetNodeConfigServiceRequest( + session_id=session_id, node_id=node_id, name=name, config=config + ) + response = self.stub.SetNodeConfigService(request) + return response.result + + def get_emane_event_channel(self, session_id: int) -> wrappers.EmaneEventChannel: + """ + Retrieves the current emane event channel being used for a session. + + :param session_id: session to get emane event channel for + :return: emane event channel + :raises grpc.RpcError: when session doesn't exist + """ + request = GetEmaneEventChannelRequest(session_id=session_id) + response = self.stub.GetEmaneEventChannel(request) + return wrappers.EmaneEventChannel.from_proto(response) + + def execute_script(self, script: str) -> Optional[int]: + """ + Executes a python script given context of the current CoreEmu object. + + :param script: script to execute + :return: session id created by the executed script, None otherwise + """ + request = ExecuteScriptRequest(script=script) + response = self.stub.ExecuteScript(request) + return response.session_id if response.session_id else None + + def wlan_link( + self, session_id: int, wlan_id: int, node1_id: int, node2_id: int, linked: bool + ) -> bool: + """ + Links/unlinks nodes on the same WLAN. + + :param session_id: session id containing wlan and nodes + :param wlan_id: wlan nodes must belong to + :param node1_id: first node of pair to link/unlink + :param node2_id: second node of pair to link/unlink + :param linked: True to link, False to unlink + :return: True for success, False otherwise + :raises grpc.RpcError: when session or one of the nodes do not exist + """ + request = WlanLinkRequest( + session_id=session_id, + wlan=wlan_id, + node1_id=node1_id, + node2_id=node2_id, + linked=linked, + ) + response = self.stub.WlanLink(request) + return response.result + + def emane_pathlosses(self, streamer: EmanePathlossesStreamer) -> None: + """ + Stream EMANE pathloss events. + + :param streamer: emane pathlosses streamer + :return: nothing + :raises grpc.RpcError: when a pathloss event session or one of the nodes do not + exist + """ + self.stub.EmanePathlosses(streamer.iter()) + + def connect(self) -> None: + """ + Open connection to server, must be closed manually. + + :return: nothing + """ + self.channel = grpc.insecure_channel( + self.address, options=[("grpc.enable_http_proxy", self.proxy)] + ) + self.stub = core_pb2_grpc.CoreApiStub(self.channel) + + def close(self) -> None: + """ + Close currently opened server channel connection. 
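A sketch combining the node config service and WLAN link helpers above; the ids are placeholders and "DefaultRoute" is only an illustrative config service name:

from core.api.grpc.client import CoreGrpcClient

client = CoreGrpcClient()
client.connect()
session_id, wlan_id = 1, 3  # placeholder ids
# assign a config service to node 2 using its default configuration
client.set_node_config_service(session_id, node_id=2, name="DefaultRoute", config={})
# force nodes 2 and 4 on the same wlan to be unlinked regardless of range
client.wlan_link(session_id, wlan_id, node1_id=2, node2_id=4, linked=False)
client.close()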
+ + :return: nothing + """ + if self.channel: + self.channel.close() + self.channel = None + + @contextmanager + def context_connect(self) -> Generator: + """ + Makes a context manager based connection to the server, will close after + context ends. + + :return: nothing + """ + try: + self.connect() + yield + finally: + self.close() diff --git a/daemon/core/api/grpc/events.py b/daemon/core/api/grpc/events.py index 65a20296..aff3c5e5 100644 --- a/daemon/core/api/grpc/events.py +++ b/daemon/core/api/grpc/events.py @@ -1,10 +1,9 @@ import logging -from collections.abc import Iterable from queue import Empty, Queue -from typing import Optional +from typing import Iterable, Optional -from core.api.grpc import core_pb2, grpcutils -from core.api.grpc.grpcutils import convert_link_data +from core.api.grpc import core_pb2 +from core.api.grpc.grpcutils import convert_link from core.emulator.data import ( ConfigData, EventData, @@ -15,21 +14,29 @@ from core.emulator.data import ( ) from core.emulator.session import Session -logger = logging.getLogger(__name__) - -def handle_node_event(session: Session, node_data: NodeData) -> core_pb2.Event: +def handle_node_event(node_data: NodeData) -> core_pb2.Event: """ Handle node event when there is a node event - :param session: session node is from :param node_data: node data :return: node event that contains node id, name, model, position, and services """ node = node_data.node - emane_configs = grpcutils.get_emane_model_configs_dict(session) - node_emane_configs = emane_configs.get(node.id, []) - node_proto = grpcutils.get_node_proto(session, node, node_emane_configs) + x, y, _ = node.position.get() + position = core_pb2.Position(x=x, y=y) + lon, lat, alt = node.position.get_geo() + geo = core_pb2.Geo(lon=lon, lat=lat, alt=alt) + services = [x.name for x in node.services] + node_proto = core_pb2.Node( + id=node.id, + name=node.name, + model=node.type, + icon=node.icon, + position=position, + geo=geo, + services=services, + ) message_type = node_data.message_type.value node_event = core_pb2.NodeEvent(message_type=message_type, node=node_proto) return core_pb2.Event(node_event=node_event, source=node_data.source) @@ -42,7 +49,7 @@ def handle_link_event(link_data: LinkData) -> core_pb2.Event: :param link_data: link data :return: link event that has message type and link information """ - link = convert_link_data(link_data) + link = convert_link(link_data) message_type = link_data.message_type.value link_event = core_pb2.LinkEvent(message_type=message_type, link=link) return core_pb2.Event(link_event=link_event, source=link_data.source) @@ -180,7 +187,7 @@ class EventStreamer: try: data = self.queue.get(timeout=1) if isinstance(data, NodeData): - event = handle_node_event(self.session, data) + event = handle_node_event(data) elif isinstance(data, LinkData): event = handle_link_event(data) elif isinstance(data, EventData): @@ -192,7 +199,7 @@ class EventStreamer: elif isinstance(data, FileData): event = handle_file_event(data) else: - logger.error("unknown event: %s", data) + logging.error("unknown event: %s", data) except Empty: pass if event: diff --git a/daemon/core/api/grpc/grpcutils.py b/daemon/core/api/grpc/grpcutils.py index f89144e4..145f7029 100644 --- a/daemon/core/api/grpc/grpcutils.py +++ b/daemon/core/api/grpc/grpcutils.py @@ -1,15 +1,16 @@ import logging import time from pathlib import Path -from typing import Any, Optional, Union +from typing import Any, Dict, List, Tuple, Type, Union import grpc from grpc import ServicerContext from core import 
utils -from core.api.grpc import common_pb2, core_pb2, wrappers +from core.api.grpc import common_pb2, core_pb2 +from core.api.grpc.common_pb2 import MappedConfig from core.api.grpc.configservices_pb2 import ConfigServiceConfig -from core.api.grpc.emane_pb2 import NodeEmaneConfig +from core.api.grpc.emane_pb2 import GetEmaneModelConfig from core.api.grpc.services_pb2 import ( NodeServiceConfig, NodeServiceData, @@ -17,30 +18,17 @@ from core.api.grpc.services_pb2 import ( ServiceDefaults, ) from core.config import ConfigurableOptions -from core.emane.nodes import EmaneNet, EmaneOptions -from core.emulator.data import InterfaceData, LinkData, LinkOptions +from core.emane.nodes import EmaneNet +from core.emulator.data import InterfaceData, LinkData, LinkOptions, NodeOptions from core.emulator.enumerations import LinkTypes, NodeTypes -from core.emulator.links import CoreLink from core.emulator.session import Session from core.errors import CoreError from core.location.mobility import BasicRangeModel, Ns2ScriptedMobility -from core.nodes.base import ( - CoreNode, - CoreNodeBase, - CoreNodeOptions, - NodeBase, - NodeOptions, - Position, -) -from core.nodes.docker import DockerNode, DockerOptions +from core.nodes.base import CoreNode, CoreNodeBase, NodeBase from core.nodes.interface import CoreInterface -from core.nodes.lxd import LxcNode, LxcOptions -from core.nodes.network import CoreNetwork, CtrlNet, PtpNet, WlanNode -from core.nodes.podman import PodmanNode, PodmanOptions -from core.nodes.wireless import WirelessNode +from core.nodes.network import WlanNode from core.services.coreservices import CoreService -logger = logging.getLogger(__name__) WORKERS = 10 @@ -63,33 +51,33 @@ class CpuUsage: return (total_diff - idle_diff) / total_diff -def add_node_data( - _class: type[NodeBase], node_proto: core_pb2.Node -) -> tuple[Position, NodeOptions]: +def add_node_data(node_proto: core_pb2.Node) -> Tuple[NodeTypes, int, NodeOptions]: """ Convert node protobuf message to data for creating a node. 
- :param _class: node class to create options from :param node_proto: node proto message :return: node type, id, and options """ - options = _class.create_options() - options.icon = node_proto.icon - options.canvas = node_proto.canvas - if isinstance(options, CoreNodeOptions): - options.model = node_proto.model - options.services = node_proto.services - options.config_services = node_proto.config_services - if isinstance(options, EmaneOptions): - options.emane_model = node_proto.emane - if isinstance(options, (DockerOptions, LxcOptions, PodmanOptions)): - options.image = node_proto.image - position = Position() - position.set(node_proto.position.x, node_proto.position.y) + _id = node_proto.id + _type = NodeTypes(node_proto.type) + options = NodeOptions( + name=node_proto.name, + model=node_proto.model, + icon=node_proto.icon, + image=node_proto.image, + services=node_proto.services, + config_services=node_proto.config_services, + ) + if node_proto.emane: + options.emane = node_proto.emane + if node_proto.server: + options.server = node_proto.server + position = node_proto.position + options.set_position(position.x, position.y) if node_proto.HasField("geo"): geo = node_proto.geo - position.set_geo(geo.lon, geo.lat, geo.alt) - return position, options + options.set_location(geo.lat, geo.lon, geo.alt) + return _type, _id, options def link_iface(iface_proto: core_pb2.Interface) -> InterfaceData: @@ -118,8 +106,8 @@ def link_iface(iface_proto: core_pb2.Interface) -> InterfaceData: def add_link_data( - link_proto: core_pb2.Link, -) -> tuple[InterfaceData, InterfaceData, LinkOptions]: + link_proto: core_pb2.Link +) -> Tuple[InterfaceData, InterfaceData, LinkOptions, LinkTypes]: """ Convert link proto to link interfaces and options data. @@ -128,6 +116,7 @@ def add_link_data( """ iface1_data = link_iface(link_proto.iface1) iface2_data = link_iface(link_proto.iface2) + link_type = LinkTypes(link_proto.type) options = LinkOptions() options_proto = link_proto.options if options_proto: @@ -142,12 +131,12 @@ def add_link_data( options.buffer = options_proto.buffer options.unidirectional = options_proto.unidirectional options.key = options_proto.key - return iface1_data, iface2_data, options + return iface1_data, iface2_data, options, link_type def create_nodes( - session: Session, node_protos: list[core_pb2.Node] -) -> tuple[list[NodeBase], list[Exception]]: + session: Session, node_protos: List[core_pb2.Node] +) -> Tuple[List[NodeBase], List[Exception]]: """ Create nodes using a thread pool and wait for completion. @@ -157,28 +146,20 @@ def create_nodes( """ funcs = [] for node_proto in node_protos: - _type = NodeTypes(node_proto.type) + _type, _id, options = add_node_data(node_proto) _class = session.get_node_class(_type) - position, options = add_node_data(_class, node_proto) - args = ( - _class, - node_proto.id or None, - node_proto.name or None, - node_proto.server or None, - position, - options, - ) + args = (_class, _id, options) funcs.append((session.add_node, args, {})) start = time.monotonic() results, exceptions = utils.threadpool(funcs) total = time.monotonic() - start - logger.debug("grpc created nodes time: %s", total) + logging.debug("grpc created nodes time: %s", total) return results, exceptions def create_links( - session: Session, link_protos: list[core_pb2.Link] -) -> tuple[list[NodeBase], list[Exception]]: + session: Session, link_protos: List[core_pb2.Link] +) -> Tuple[List[NodeBase], List[Exception]]: """ Create links using a thread pool and wait for completion. 
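To make the server-side helper above concrete, a small sketch of what add_node_data consumes and returns; the ids, name, and model are illustrative, and NodeTypes.DEFAULT is the standard container node type:

from core.api.grpc import core_pb2
from core.api.grpc.grpcutils import add_node_data
from core.emulator.enumerations import NodeTypes

# a minimal node proto as the gRPC server would receive it
node_proto = core_pb2.Node(
    id=1,
    type=NodeTypes.DEFAULT.value,
    name="n1",
    model="router",
    position=core_pb2.Position(x=100, y=150),
)
_type, _id, options = add_node_data(node_proto)
# _type == NodeTypes.DEFAULT, _id == 1, options carries the name/model/position
print(_type, _id, options.name, options.model)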
@@ -190,19 +171,19 @@ def create_links( for link_proto in link_protos: node1_id = link_proto.node1_id node2_id = link_proto.node2_id - iface1, iface2, options = add_link_data(link_proto) - args = (node1_id, node2_id, iface1, iface2, options) + iface1, iface2, options, link_type = add_link_data(link_proto) + args = (node1_id, node2_id, iface1, iface2, options, link_type) funcs.append((session.add_link, args, {})) start = time.monotonic() results, exceptions = utils.threadpool(funcs) total = time.monotonic() - start - logger.debug("grpc created links time: %s", total) + logging.debug("grpc created links time: %s", total) return results, exceptions def edit_links( - session: Session, link_protos: list[core_pb2.Link] -) -> tuple[list[None], list[Exception]]: + session: Session, link_protos: List[core_pb2.Link] +) -> Tuple[List[None], List[Exception]]: """ Edit links using a thread pool and wait for completion. @@ -214,13 +195,13 @@ def edit_links( for link_proto in link_protos: node1_id = link_proto.node1_id node2_id = link_proto.node2_id - iface1, iface2, options = add_link_data(link_proto) - args = (node1_id, node2_id, iface1.id, iface2.id, options) + iface1, iface2, options, link_type = add_link_data(link_proto) + args = (node1_id, node2_id, iface1.id, iface2.id, options, link_type) funcs.append((session.update_link, args, {})) start = time.monotonic() results, exceptions = utils.threadpool(funcs) total = time.monotonic() - start - logger.debug("grpc edit links time: %s", total) + logging.debug("grpc edit links time: %s", total) return results, exceptions @@ -236,26 +217,10 @@ def convert_value(value: Any) -> str: return value -def convert_session_options(session: Session) -> dict[str, common_pb2.ConfigOption]: - config_options = {} - for option in session.options.options: - value = session.options.get(option.id) - config_option = common_pb2.ConfigOption( - label=option.label, - name=option.id, - value=value, - type=option.type.value, - select=option.options, - group="Options", - ) - config_options[option.id] = config_option - return config_options - - def get_config_options( - config: dict[str, str], - configurable_options: Union[ConfigurableOptions, type[ConfigurableOptions]], -) -> dict[str, common_pb2.ConfigOption]: + config: Dict[str, str], + configurable_options: Union[ConfigurableOptions, Type[ConfigurableOptions]], +) -> Dict[str, common_pb2.ConfigOption]: """ Retrieve configuration options in a form that is used by the grpc server. @@ -283,15 +248,12 @@ def get_config_options( return results -def get_node_proto( - session: Session, node: NodeBase, emane_configs: list[NodeEmaneConfig] -) -> core_pb2.Node: +def get_node_proto(session: Session, node: NodeBase) -> core_pb2.Node: """ Convert CORE node to protobuf representation. 
:param session: session containing node :param node: node to convert - :param emane_configs: emane configs related to node :return: node proto """ node_type = session.get_node_type(node.__class__) @@ -301,77 +263,24 @@ def get_node_proto( geo = core_pb2.Geo( lat=node.position.lat, lon=node.position.lon, alt=node.position.alt ) - services = [x.name for x in node.services] - node_dir = None - config_services = [] - if isinstance(node, CoreNodeBase): - node_dir = str(node.directory) - config_services = [x for x in node.config_services] - channel = None - if isinstance(node, CoreNode): - channel = str(node.ctrlchnlname) + services = getattr(node, "services", []) + if services is None: + services = [] + services = [x.name for x in services] + config_services = getattr(node, "config_services", {}) + config_services = [x for x in config_services] emane_model = None if isinstance(node, EmaneNet): - emane_model = node.wireless_model.name - image = None - if isinstance(node, (DockerNode, LxcNode, PodmanNode)): - image = node.image - # check for wlan config - wlan_config = session.mobility.get_configs( - node.id, config_type=BasicRangeModel.name - ) - if wlan_config: - wlan_config = get_config_options(wlan_config, BasicRangeModel) - # check for wireless config - wireless_config = None - if isinstance(node, WirelessNode): - configs = node.get_config() - wireless_config = {} - for config in configs.values(): - config_option = common_pb2.ConfigOption( - label=config.label, - name=config.id, - value=config.default, - type=config.type.value, - select=config.options, - group=config.group, - ) - wireless_config[config.id] = config_option - # check for mobility config - mobility_config = session.mobility.get_configs( - node.id, config_type=Ns2ScriptedMobility.name - ) - if mobility_config: - mobility_config = get_config_options(mobility_config, Ns2ScriptedMobility) - # check for service configs - custom_services = session.services.custom_services.get(node.id) - service_configs = {} - if custom_services: - for service in custom_services.values(): - service_proto = get_service_configuration(service) - service_configs[service.name] = NodeServiceConfig( - node_id=node.id, - service=service.name, - data=service_proto, - files=service.config_data, - ) - # check for config service configs - config_service_configs = {} - if isinstance(node, CoreNode): - for service in node.config_services.values(): - if not service.custom_templates and not service.custom_config: - continue - config_service_configs[service.name] = ConfigServiceConfig( - node_id=node.id, - name=service.name, - templates=service.custom_templates, - config=service.custom_config, - ) + emane_model = node.model.name + model = getattr(node, "type", None) + node_dir = getattr(node, "nodedir", None) + channel = getattr(node, "ctrlchnlname", None) + image = getattr(node, "image", None) return core_pb2.Node( id=node.id, name=node.name, emane=emane_model, - model=node.model, + model=model, type=node_type.value, position=position, geo=geo, @@ -381,94 +290,64 @@ def get_node_proto( config_services=config_services, dir=node_dir, channel=channel, - canvas=node.canvas, - wlan_config=wlan_config, - wireless_config=wireless_config, - mobility_config=mobility_config, - service_configs=service_configs, - config_service_configs=config_service_configs, - emane_configs=emane_configs, ) -def get_links(session: Session, node: NodeBase) -> list[core_pb2.Link]: +def get_links(node: NodeBase): """ Retrieve a list of links for grpc to use. 
- :param session: session to get links for node :param node: node to get links from :return: protobuf links """ - link_protos = [] - for core_link in session.link_manager.node_links(node): - link_protos.extend(convert_core_link(core_link)) - if isinstance(node, (WlanNode, EmaneNet)): - for link_data in node.links(): - link_protos.append(convert_link_data(link_data)) - return link_protos - - -def convert_iface(iface: CoreInterface) -> core_pb2.Interface: - """ - Convert interface to protobuf. - - :param iface: interface to convert - :return: protobuf interface - """ - if isinstance(iface.node, CoreNetwork): - return core_pb2.Interface(id=iface.id) - else: - ip4 = iface.get_ip4() - ip4_mask = ip4.prefixlen if ip4 else None - ip4 = str(ip4.ip) if ip4 else None - ip6 = iface.get_ip6() - ip6_mask = ip6.prefixlen if ip6 else None - ip6 = str(ip6.ip) if ip6 else None - mac = str(iface.mac) if iface.mac else None - return core_pb2.Interface( - id=iface.id, - name=iface.name, - mac=mac, - ip4=ip4, - ip4_mask=ip4_mask, - ip6=ip6, - ip6_mask=ip6_mask, - ) - - -def convert_core_link(core_link: CoreLink) -> list[core_pb2.Link]: - """ - Convert core link to protobuf data. - - :param core_link: core link to convert - :return: protobuf link data - """ links = [] - node1, iface1 = core_link.node1, core_link.iface1 - node2, iface2 = core_link.node2, core_link.iface2 - unidirectional = core_link.is_unidirectional() - link = convert_link(node1, iface1, node2, iface2, iface1.options, unidirectional) - links.append(link) - if unidirectional: - link = convert_link( - node2, iface2, node1, iface1, iface2.options, unidirectional - ) - links.append(link) + for link in node.links(): + link_proto = convert_link(link) + links.append(link_proto) return links -def convert_link_data(link_data: LinkData) -> core_pb2.Link: +def convert_iface(iface_data: InterfaceData) -> core_pb2.Interface: + return core_pb2.Interface( + id=iface_data.id, + name=iface_data.name, + mac=iface_data.mac, + ip4=iface_data.ip4, + ip4_mask=iface_data.ip4_mask, + ip6=iface_data.ip6, + ip6_mask=iface_data.ip6_mask, + ) + + +def convert_link_options(options_data: LinkOptions) -> core_pb2.LinkOptions: + return core_pb2.LinkOptions( + jitter=options_data.jitter, + key=options_data.key, + mburst=options_data.mburst, + mer=options_data.mer, + loss=options_data.loss, + bandwidth=options_data.bandwidth, + burst=options_data.burst, + delay=options_data.delay, + dup=options_data.dup, + buffer=options_data.buffer, + unidirectional=options_data.unidirectional, + ) + + +def convert_link(link_data: LinkData) -> core_pb2.Link: """ Convert link_data into core protobuf link. + :param link_data: link to convert :return: core protobuf Link """ iface1 = None if link_data.iface1 is not None: - iface1 = convert_iface_data(link_data.iface1) + iface1 = convert_iface(link_data.iface1) iface2 = None if link_data.iface2 is not None: - iface2 = convert_iface_data(link_data.iface2) + iface2 = convert_iface(link_data.iface2) options = convert_link_options(link_data.options) return core_pb2.Link( type=link_data.type.value, @@ -483,132 +362,25 @@ def convert_link_data(link_data: LinkData) -> core_pb2.Link: ) -def convert_iface_data(iface_data: InterfaceData) -> core_pb2.Interface: - """ - Convert interface data to protobuf. 
- - :param iface_data: interface data to convert - :return: interface protobuf - """ - return core_pb2.Interface( - id=iface_data.id, - name=iface_data.name, - mac=iface_data.mac, - ip4=iface_data.ip4, - ip4_mask=iface_data.ip4_mask, - ip6=iface_data.ip6, - ip6_mask=iface_data.ip6_mask, - ) - - -def convert_link_options(options: LinkOptions) -> core_pb2.LinkOptions: - """ - Convert link options to protobuf. - - :param options: link options to convert - :return: link options protobuf - """ - return core_pb2.LinkOptions( - jitter=options.jitter, - key=options.key, - mburst=options.mburst, - mer=options.mer, - loss=options.loss, - bandwidth=options.bandwidth, - burst=options.burst, - delay=options.delay, - dup=options.dup, - buffer=options.buffer, - unidirectional=options.unidirectional, - ) - - -def convert_options_proto(options: core_pb2.LinkOptions) -> LinkOptions: - return LinkOptions( - delay=options.delay, - bandwidth=options.bandwidth, - loss=options.loss, - dup=options.dup, - jitter=options.jitter, - mer=options.mer, - burst=options.burst, - mburst=options.mburst, - buffer=options.buffer, - unidirectional=options.unidirectional, - key=options.key, - ) - - -def convert_link( - node1: NodeBase, - iface1: Optional[CoreInterface], - node2: NodeBase, - iface2: Optional[CoreInterface], - options: LinkOptions, - unidirectional: bool, -) -> core_pb2.Link: - """ - Convert link objects to link protobuf. - - :param node1: first node in link - :param iface1: node1 interface - :param node2: second node in link - :param iface2: node2 interface - :param options: link options - :param unidirectional: if this link is considered unidirectional - :return: protobuf link - """ - if iface1 is not None: - iface1 = convert_iface(iface1) - if iface2 is not None: - iface2 = convert_iface(iface2) - is_node1_wireless = isinstance(node1, (WlanNode, EmaneNet)) - is_node2_wireless = isinstance(node2, (WlanNode, EmaneNet)) - if not (is_node1_wireless or is_node2_wireless): - options = convert_link_options(options) - options.unidirectional = unidirectional - else: - options = None - return core_pb2.Link( - type=LinkTypes.WIRED.value, - node1_id=node1.id, - node2_id=node2.id, - iface1=iface1, - iface2=iface2, - options=options, - network_id=None, - label=None, - color=None, - ) - - -def parse_proc_net_dev(lines: list[str]) -> dict[str, dict[str, float]]: - """ - Parse lines of output from /proc/net/dev. 
- - :param lines: lines of /proc/net/dev - :return: parsed device to tx/rx values - """ - stats = {} - for line in lines[2:]: - line = line.strip() - if not line: - continue - line = line.split() - line[0] = line[0].strip(":") - stats[line[0]] = {"rx": float(line[1]), "tx": float(line[9])} - return stats - - -def get_net_stats() -> dict[str, dict[str, float]]: +def get_net_stats() -> Dict[str, Dict]: """ Retrieve status about the current interfaces in the system :return: send and receive status of the interfaces in the system """ with open("/proc/net/dev", "r") as f: - lines = f.readlines()[2:] - return parse_proc_net_dev(lines) + data = f.readlines()[2:] + + stats = {} + for line in data: + line = line.strip() + if not line: + continue + line = line.split() + line[0] = line[0].strip(":") + stats[line[0]] = {"rx": float(line[1]), "tx": float(line[9])} + + return stats def session_location(session: Session, location: core_pb2.SessionLocation) -> None: @@ -667,14 +439,39 @@ def get_service_configuration(service: CoreService) -> NodeServiceData: ) -def iface_to_proto(session: Session, iface: CoreInterface) -> core_pb2.Interface: +def iface_to_data(iface: CoreInterface) -> InterfaceData: + ip4 = iface.get_ip4() + ip4_addr = str(ip4.ip) if ip4 else None + ip4_mask = ip4.prefixlen if ip4 else None + ip6 = iface.get_ip6() + ip6_addr = str(ip6.ip) if ip6 else None + ip6_mask = ip6.prefixlen if ip6 else None + return InterfaceData( + id=iface.node_id, + name=iface.name, + mac=str(iface.mac), + ip4=ip4_addr, + ip4_mask=ip4_mask, + ip6=ip6_addr, + ip6_mask=ip6_mask, + ) + + +def iface_to_proto(node_id: int, iface: CoreInterface) -> core_pb2.Interface: """ Convenience for converting a core interface to the protobuf representation. - :param session: session interface belongs to + :param node_id: id of node to convert interface for :param iface: interface to convert :return: interface proto """ + if iface.node and iface.node.id == node_id: + _id = iface.node_id + else: + _id = iface.net_id + net_id = iface.net.id if iface.net else None + node_id = iface.node.id if iface.node else None + net2_id = iface.othernet.id if iface.othernet else None ip4_net = iface.get_ip4() ip4 = str(ip4_net.ip) if ip4_net else None ip4_mask = ip4_net.prefixlen if ip4_net else None @@ -682,13 +479,11 @@ def iface_to_proto(session: Session, iface: CoreInterface) -> core_pb2.Interface ip6 = str(ip6_net.ip) if ip6_net else None ip6_mask = ip6_net.prefixlen if ip6_net else None mac = str(iface.mac) if iface.mac else None - nem_id = None - nem_port = None - if isinstance(iface.net, EmaneNet): - nem_id = session.emane.get_nem_id(iface) - nem_port = session.emane.get_nem_port(iface) return core_pb2.Interface( - id=iface.id, + id=_id, + net_id=net_id, + net2_id=net2_id, + node_id=node_id, name=iface.name, mac=mac, mtu=iface.mtu, @@ -697,8 +492,6 @@ def iface_to_proto(session: Session, iface: CoreInterface) -> core_pb2.Interface ip4_mask=ip4_mask, ip6=ip6, ip6_mask=ip6_mask, - nem_id=nem_id, - nem_port=nem_port, ) @@ -729,36 +522,58 @@ def get_nem_id( return nem_id -def get_emane_model_configs_dict(session: Session) -> dict[int, list[NodeEmaneConfig]]: - """ - Get emane model configuration protobuf data. 
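For reference, the /proc/net/dev parsing above turns kernel counter rows into a per-device rx/tx byte map; a standalone sketch with a fabricated sample row (column 1 is rx bytes, column 9 is tx bytes):

# a sample /proc/net/dev row: interface name, then 8 rx columns and 8 tx columns
sample = "  eth0: 1500 10 0 0 0 0 0 0 900 8 0 0 0 0 0 0"
parts = sample.strip().split()
name = parts[0].strip(":")
stats = {name: {"rx": float(parts[1]), "tx": float(parts[9])}}
print(stats)  # {'eth0': {'rx': 1500.0, 'tx': 900.0}}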
- - :param session: session to get emane model configuration for - :return: dict of emane model protobuf configurations - """ - configs = {} - for _id, model_configs in session.emane.node_configs.items(): +def get_emane_model_configs(session: Session) -> List[GetEmaneModelConfig]: + configs = [] + for _id in session.emane.node_configurations: + if _id == -1: + continue + model_configs = session.emane.node_configurations[_id] for model_name in model_configs: - model_class = session.emane.get_model(model_name) - current_config = session.emane.get_config(_id, model_name) - config = get_config_options(current_config, model_class) + model = session.emane.models[model_name] + current_config = session.emane.get_model_config(_id, model_name) + config = get_config_options(current_config, model) node_id, iface_id = utils.parse_iface_config_id(_id) iface_id = iface_id if iface_id is not None else -1 - node_config = NodeEmaneConfig( - model=model_name, iface_id=iface_id, config=config + model_config = GetEmaneModelConfig( + node_id=node_id, model=model_name, iface_id=iface_id, config=config ) - node_configs = configs.setdefault(node_id, []) - node_configs.append(node_config) + configs.append(model_config) return configs -def get_hooks(session: Session) -> list[core_pb2.Hook]: - """ - Retrieve hook protobuf data for a session. +def get_wlan_configs(session: Session) -> Dict[int, MappedConfig]: + configs = {} + for node_id in session.mobility.node_configurations: + model_config = session.mobility.node_configurations[node_id] + if node_id == -1: + continue + for model_name in model_config: + if model_name != BasicRangeModel.name: + continue + current_config = session.mobility.get_model_config(node_id, model_name) + config = get_config_options(current_config, BasicRangeModel) + mapped_config = MappedConfig(config=config) + configs[node_id] = mapped_config + return configs - :param session: session to get hooks for - :return: list of hook protobufs - """ + +def get_mobility_configs(session: Session) -> Dict[int, MappedConfig]: + configs = {} + for node_id in session.mobility.node_configurations: + model_config = session.mobility.node_configurations[node_id] + if node_id == -1: + continue + for model_name in model_config: + if model_name != Ns2ScriptedMobility.name: + continue + current_config = session.mobility.get_model_config(node_id, model_name) + config = get_config_options(current_config, Ns2ScriptedMobility) + mapped_config = MappedConfig(config=config) + configs[node_id] = mapped_config + return configs + + +def get_hooks(session: Session) -> List[core_pb2.Hook]: hooks = [] for state in session.hooks: state_hooks = session.hooks[state] @@ -768,31 +583,65 @@ def get_hooks(session: Session) -> list[core_pb2.Hook]: return hooks -def get_default_services(session: Session) -> list[ServiceDefaults]: - """ - Retrieve the default service sets for a given session. 
+def get_emane_models(session: Session) -> List[str]: + emane_models = [] + for model in session.emane.models.keys(): + if len(model.split("_")) != 2: + continue + emane_models.append(model) + return emane_models - :param session: session to get default service sets for - :return: list of default service sets - """ + +def get_default_services(session: Session) -> List[ServiceDefaults]: default_services = [] - for model, services in session.services.default_services.items(): - default_service = ServiceDefaults(model=model, services=services) + for name, services in session.services.default_services.items(): + default_service = ServiceDefaults(node_type=name, services=services) default_services.append(default_service) return default_services +def get_node_service_configs(session: Session) -> List[NodeServiceConfig]: + configs = [] + for node_id, service_configs in session.services.custom_services.items(): + for name in service_configs: + service = session.services.get_service(node_id, name) + service_proto = get_service_configuration(service) + config = NodeServiceConfig( + node_id=node_id, + service=name, + data=service_proto, + files=service.config_data, + ) + configs.append(config) + return configs + + +def get_node_config_service_configs(session: Session) -> List[ConfigServiceConfig]: + configs = [] + for node in session.nodes.values(): + if not isinstance(node, CoreNodeBase): + continue + for name, service in node.config_services.items(): + if not service.custom_templates and not service.custom_config: + continue + config_proto = ConfigServiceConfig( + node_id=node.id, + name=name, + templates=service.custom_templates, + config=service.custom_config, + ) + configs.append(config_proto) + return configs + + +def get_emane_config(session: Session) -> Dict[str, common_pb2.ConfigOption]: + current_config = session.emane.get_configs() + return get_config_options(current_config, session.emane.emane_config) + + def get_mobility_node( session: Session, node_id: int, context: ServicerContext ) -> Union[WlanNode, EmaneNet]: - """ - Get mobility node. - - :param session: session to get node from - :param node_id: id of node to get - :param context: grpc context - :return: wlan or emane node - """ try: return session.get_node(node_id, WlanNode) except CoreError: @@ -800,109 +649,3 @@ def get_mobility_node( return session.get_node(node_id, EmaneNet) except CoreError: context.abort(grpc.StatusCode.NOT_FOUND, "node id is not for wlan or emane") - - -def convert_session(session: Session) -> wrappers.Session: - """ - Convert session to its wrapped version. 
- - :param session: session to convert - :return: wrapped session data - """ - emane_configs = get_emane_model_configs_dict(session) - nodes = [] - links = [] - for _id in session.nodes: - node = session.nodes[_id] - if not isinstance(node, (PtpNet, CtrlNet)): - node_emane_configs = emane_configs.get(node.id, []) - node_proto = get_node_proto(session, node, node_emane_configs) - nodes.append(node_proto) - if isinstance(node, (WlanNode, EmaneNet)): - for link_data in node.links(): - links.append(convert_link_data(link_data)) - for core_link in session.link_manager.links(): - links.extend(convert_core_link(core_link)) - default_services = get_default_services(session) - x, y, z = session.location.refxyz - lat, lon, alt = session.location.refgeo - location = core_pb2.SessionLocation( - x=x, y=y, z=z, lat=lat, lon=lon, alt=alt, scale=session.location.refscale - ) - hooks = get_hooks(session) - session_file = str(session.file_path) if session.file_path else None - options = convert_session_options(session) - servers = [ - core_pb2.Server(name=x.name, host=x.host) - for x in session.distributed.servers.values() - ] - return core_pb2.Session( - id=session.id, - state=session.state.value, - nodes=nodes, - links=links, - dir=str(session.directory), - user=session.user, - default_services=default_services, - location=location, - hooks=hooks, - metadata=session.metadata, - file=session_file, - options=options, - servers=servers, - ) - - -def configure_node( - session: Session, node: core_pb2.Node, core_node: NodeBase, context: ServicerContext -) -> None: - """ - Configure a node using all provided protobuf data. - - :param session: session for node - :param node: node protobuf data - :param core_node: session node - :param context: grpc context - :return: nothing - """ - for emane_config in node.emane_configs: - _id = utils.iface_config_id(node.id, emane_config.iface_id) - config = {k: v.value for k, v in emane_config.config.items()} - session.emane.set_config(_id, emane_config.model, config) - if node.wlan_config: - config = {k: v.value for k, v in node.wlan_config.items()} - session.mobility.set_model_config(node.id, BasicRangeModel.name, config) - if node.mobility_config: - config = {k: v.value for k, v in node.mobility_config.items()} - session.mobility.set_model_config(node.id, Ns2ScriptedMobility.name, config) - if isinstance(core_node, WirelessNode) and node.wireless_config: - config = {k: v.value for k, v in node.wireless_config.items()} - core_node.set_config(config) - for service_name, service_config in node.service_configs.items(): - data = service_config.data - config = ServiceConfig( - node_id=node.id, - service=service_name, - startup=data.startup, - validate=data.validate, - shutdown=data.shutdown, - files=data.configs, - directories=data.dirs, - ) - service_configuration(session, config) - for file_name, file_data in service_config.files.items(): - session.services.set_service_file( - node.id, service_name, file_name, file_data - ) - if node.config_service_configs: - if not isinstance(core_node, CoreNode): - context.abort( - grpc.StatusCode.INVALID_ARGUMENT, - "invalid node type with config service configs", - ) - for service_name, service_config in node.config_service_configs.items(): - service = core_node.config_services[service_name] - if service_config.config: - service.set_config(service_config.config) - for name, template in service_config.templates.items(): - service.set_template(name, template) diff --git a/daemon/core/api/grpc/server.py b/daemon/core/api/grpc/server.py index 
6a86ab0a..73fa2fa6 100644 --- a/daemon/core/api/grpc/server.py +++ b/daemon/core/api/grpc/server.py @@ -1,15 +1,12 @@ +import atexit import logging import os import re -import signal -import sys import tempfile +import threading import time -from collections.abc import Iterable from concurrent import futures -from pathlib import Path -from re import Pattern -from typing import Optional +from typing import Iterable, Optional, Pattern, Type import grpc from grpc import ServicerContext @@ -26,31 +23,35 @@ from core.api.grpc.configservices_pb2 import ( ConfigService, GetConfigServiceDefaultsRequest, GetConfigServiceDefaultsResponse, - GetConfigServiceRenderedRequest, - GetConfigServiceRenderedResponse, + GetConfigServicesRequest, + GetConfigServicesResponse, + GetNodeConfigServiceConfigsRequest, + GetNodeConfigServiceConfigsResponse, GetNodeConfigServiceRequest, GetNodeConfigServiceResponse, + GetNodeConfigServicesRequest, + GetNodeConfigServicesResponse, + SetNodeConfigServiceRequest, + SetNodeConfigServiceResponse, ) -from core.api.grpc.core_pb2 import ( - ExecuteScriptResponse, - GetWirelessConfigRequest, - GetWirelessConfigResponse, - LinkedRequest, - LinkedResponse, - WirelessConfigRequest, - WirelessConfigResponse, - WirelessLinkedRequest, - WirelessLinkedResponse, -) +from core.api.grpc.core_pb2 import ExecuteScriptResponse from core.api.grpc.emane_pb2 import ( EmaneLinkRequest, EmaneLinkResponse, EmanePathlossesRequest, EmanePathlossesResponse, + GetEmaneConfigRequest, + GetEmaneConfigResponse, GetEmaneEventChannelRequest, GetEmaneEventChannelResponse, GetEmaneModelConfigRequest, GetEmaneModelConfigResponse, + GetEmaneModelConfigsRequest, + GetEmaneModelConfigsResponse, + GetEmaneModelsRequest, + GetEmaneModelsResponse, + SetEmaneConfigRequest, + SetEmaneConfigResponse, SetEmaneModelConfigRequest, SetEmaneModelConfigResponse, ) @@ -59,6 +60,8 @@ from core.api.grpc.grpcutils import get_config_options, get_links, get_net_stats from core.api.grpc.mobility_pb2 import ( GetMobilityConfigRequest, GetMobilityConfigResponse, + GetMobilityConfigsRequest, + GetMobilityConfigsResponse, MobilityAction, MobilityActionRequest, MobilityActionResponse, @@ -66,48 +69,54 @@ from core.api.grpc.mobility_pb2 import ( SetMobilityConfigResponse, ) from core.api.grpc.services_pb2 import ( + GetNodeServiceConfigsRequest, + GetNodeServiceConfigsResponse, GetNodeServiceFileRequest, GetNodeServiceFileResponse, GetNodeServiceRequest, GetNodeServiceResponse, GetServiceDefaultsRequest, GetServiceDefaultsResponse, + GetServicesRequest, + GetServicesResponse, Service, ServiceAction, ServiceActionRequest, ServiceActionResponse, + SetNodeServiceFileRequest, + SetNodeServiceFileResponse, + SetNodeServiceRequest, + SetNodeServiceResponse, SetServiceDefaultsRequest, SetServiceDefaultsResponse, ) from core.api.grpc.wlan_pb2 import ( GetWlanConfigRequest, GetWlanConfigResponse, + GetWlanConfigsRequest, + GetWlanConfigsResponse, SetWlanConfigRequest, SetWlanConfigResponse, WlanLinkRequest, WlanLinkResponse, ) -from core.configservice.base import ConfigServiceBootError -from core.emane.modelmanager import EmaneModelManager from core.emulator.coreemu import CoreEmu -from core.emulator.data import InterfaceData, LinkData, LinkOptions +from core.emulator.data import InterfaceData, LinkData, LinkOptions, NodeOptions from core.emulator.enumerations import ( EventTypes, ExceptionLevels, + LinkTypes, MessageFlags, - NodeTypes, ) from core.emulator.session import NT, Session from core.errors import CoreCommandError, CoreError from 
core.location.mobility import BasicRangeModel, Ns2ScriptedMobility from core.nodes.base import CoreNode, NodeBase -from core.nodes.network import CoreNetwork, WlanNode -from core.nodes.wireless import WirelessNode +from core.nodes.network import CtrlNet, PtpNet, WlanNode from core.services.coreservices import ServiceManager -logger = logging.getLogger(__name__) _ONE_DAY_IN_SECONDS: int = 60 * 60 * 24 -_INTERFACE_REGEX: Pattern[str] = re.compile(r"beth(?P[0-9a-fA-F]+)") +_INTERFACE_REGEX: Pattern = re.compile(r"veth(?P[0-9a-fA-F]+)") _MAX_WORKERS = 1000 @@ -123,20 +132,11 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): self.coreemu: CoreEmu = coreemu self.running: bool = True self.server: Optional[grpc.Server] = None - # catch signals - signal.signal(signal.SIGHUP, self._signal_handler) - signal.signal(signal.SIGINT, self._signal_handler) - signal.signal(signal.SIGTERM, self._signal_handler) - signal.signal(signal.SIGUSR1, self._signal_handler) - signal.signal(signal.SIGUSR2, self._signal_handler) + atexit.register(self._exit_handler) - def _signal_handler(self, signal_number: int, _) -> None: - logger.info("caught signal: %s", signal_number) - self.coreemu.shutdown() + def _exit_handler(self) -> None: + logging.debug("catching exit, stop running") self.running = False - if self.server: - self.server.stop(None) - sys.exit(signal_number) def _is_running(self, context) -> bool: return self.running and context.is_active() @@ -145,7 +145,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): context.abort(grpc.StatusCode.CANCELLED, "server stopping") def listen(self, address: str) -> None: - logger.info("CORE gRPC API listening on: %s", address) + logging.info("CORE gRPC API listening on: %s", address) self.server = grpc.server(futures.ThreadPoolExecutor(max_workers=_MAX_WORKERS)) core_pb2_grpc.add_CoreApiServicer_to_server(self, self.server) self.server.add_insecure_port(address) @@ -173,7 +173,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): return session def get_node( - self, session: Session, node_id: int, context: ServicerContext, _class: type[NT] + self, session: Session, node_id: int, context: ServicerContext, _class: Type[NT] ) -> NT: """ Retrieve node given session and node id @@ -190,29 +190,9 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): except CoreError as e: context.abort(grpc.StatusCode.NOT_FOUND, str(e)) - def move_node( - self, - context: ServicerContext, - session_id: int, - node_id: int, - geo: core_pb2.Geo = None, - position: core_pb2.Position = None, - source: str = None, - ): - if not geo and not position: - raise CoreError("move node must provide a geo or position to move") - session = self.get_session(session_id, context) - node = self.get_node(session, node_id, context, NodeBase) - if geo: - session.set_node_geo(node, geo.lon, geo.lat, geo.alt) - else: - session.set_node_pos(node, position.x, position.y) - source = source if source else None - session.broadcast_node(node, source=source) - def validate_service( self, name: str, context: ServicerContext - ) -> type[ConfigService]: + ) -> Type[ConfigService]: """ Validates a configuration service is a valid known service. 
@@ -226,38 +206,6 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): context.abort(grpc.StatusCode.NOT_FOUND, f"unknown service {name}") return service - def GetConfig( - self, request: core_pb2.GetConfigRequest, context: ServicerContext - ) -> core_pb2.GetConfigResponse: - services = [] - for name in ServiceManager.services: - service = ServiceManager.services[name] - service_proto = Service(group=service.group, name=service.name) - services.append(service_proto) - config_services = [] - for service in self.coreemu.service_manager.services.values(): - service_proto = ConfigService( - name=service.name, - group=service.group, - executables=service.executables, - dependencies=service.dependencies, - directories=service.directories, - files=service.files, - startup=service.startup, - validate=service.validate, - shutdown=service.shutdown, - validation_mode=service.validation_mode.value, - validation_timer=service.validation_timer, - validation_period=service.validation_period, - ) - config_services.append(service_proto) - emane_models = [x.name for x in EmaneModelManager.models.values()] - return core_pb2.GetConfigResponse( - services=services, - config_services=config_services, - emane_models=emane_models, - ) - def StartSession( self, request: core_pb2.StartSessionRequest, context: ServicerContext ) -> core_pb2.StartSessionResponse: @@ -268,88 +216,91 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: grpc context :return: start session response """ - logger.debug("start session: %s", request) - session = self.get_session(request.session.id, context) + logging.debug("start session: %s", request) + session = self.get_session(request.session_id, context) # clear previous state and setup for creation session.clear() - session.directory.mkdir(exist_ok=True) - if request.definition: - state = EventTypes.DEFINITION_STATE - else: - state = EventTypes.CONFIGURATION_STATE - session.set_state(state) - if request.session.user: - session.set_user(request.session.user) - - # session options - for option in request.session.options.values(): - if option.value: - session.options.set(option.name, option.value) - session.metadata = dict(request.session.metadata) - - # add servers - for server in request.session.servers: - session.distributed.add_server(server.name, server.host) + if not os.path.exists(session.session_dir): + os.mkdir(session.session_dir) + session.set_state(EventTypes.CONFIGURATION_STATE) # location - if request.session.HasField("location"): - grpcutils.session_location(session, request.session.location) + if request.HasField("location"): + grpcutils.session_location(session, request.location) # add all hooks - for hook in request.session.hooks: + for hook in request.hooks: state = EventTypes(hook.state) session.add_hook(state, hook.file, hook.data) # create nodes - _, exceptions = grpcutils.create_nodes(session, request.session.nodes) + _, exceptions = grpcutils.create_nodes(session, request.nodes) if exceptions: exceptions = [str(x) for x in exceptions] return core_pb2.StartSessionResponse(result=False, exceptions=exceptions) - # check for configurations - for node in request.session.nodes: - core_node = self.get_node(session, node.id, context, NodeBase) - grpcutils.configure_node(session, node, core_node, context) + # emane configs + config = session.emane.get_configs() + config.update(request.emane_config) + for config in request.emane_model_configs: + _id = utils.iface_config_id(config.node_id, config.iface_id) + session.emane.set_model_config(_id, 
config.model, config.config) + + # wlan configs + for config in request.wlan_configs: + session.mobility.set_model_config( + config.node_id, BasicRangeModel.name, config.config + ) + + # mobility configs + for config in request.mobility_configs: + session.mobility.set_model_config( + config.node_id, Ns2ScriptedMobility.name, config.config + ) + + # service configs + for config in request.service_configs: + grpcutils.service_configuration(session, config) + + # config service configs + for config in request.config_service_configs: + node = self.get_node(session, config.node_id, context, CoreNode) + service = node.config_services[config.name] + if config.config: + service.set_config(config.config) + for name, template in config.templates.items(): + service.set_template(name, template) + + # service file configs + for config in request.service_file_configs: + session.services.set_service_file( + config.node_id, config.service, config.file, config.data + ) # create links - links = [] - edit_links = [] - known_links = set() - for link in request.session.links: - iface1 = link.iface1.id if link.iface1 else None - iface2 = link.iface2.id if link.iface2 else None - if link.node1_id < link.node2_id: - link_id = (link.node1_id, iface1, link.node2_id, iface2) - else: - link_id = (link.node2_id, iface2, link.node1_id, iface1) - if link_id in known_links: - edit_links.append(link) - else: - known_links.add(link_id) - links.append(link) - _, exceptions = grpcutils.create_links(session, links) + _, exceptions = grpcutils.create_links(session, request.links) if exceptions: exceptions = [str(x) for x in exceptions] return core_pb2.StartSessionResponse(result=False, exceptions=exceptions) - _, exceptions = grpcutils.edit_links(session, edit_links) + + # asymmetric links + _, exceptions = grpcutils.edit_links(session, request.asymmetric_links) if exceptions: exceptions = [str(x) for x in exceptions] return core_pb2.StartSessionResponse(result=False, exceptions=exceptions) # set to instantiation and start - if not request.definition: - session.set_state(EventTypes.INSTANTIATION_STATE) - # boot services - boot_exceptions = session.instantiate() - if boot_exceptions: - exceptions = [] - for boot_exception in boot_exceptions: - for service_exception in boot_exception.args: - exceptions.append(str(service_exception)) - return core_pb2.StartSessionResponse( - result=False, exceptions=exceptions - ) + session.set_state(EventTypes.INSTANTIATION_STATE) + + # boot services + boot_exceptions = session.instantiate() + if boot_exceptions: + exceptions = [] + for boot_exception in boot_exceptions: + for service_exception in boot_exception.args: + exceptions.append(str(service_exception)) + return core_pb2.StartSessionResponse(result=False, exceptions=exceptions) return core_pb2.StartSessionResponse(result=True) def StopSession( @@ -362,7 +313,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: grpc context :return: stop session response """ - logger.debug("stop session: %s", request) + logging.debug("stop session: %s", request) session = self.get_session(request.session_id, context) session.data_collect() session.shutdown() @@ -378,13 +329,14 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: :return: a create-session response """ - logger.debug("create session: %s", request) + logging.debug("create session: %s", request) session = self.coreemu.create_session(request.session_id) session.set_state(EventTypes.DEFINITION_STATE) session.location.setrefgeo(47.57917, -122.13232, 2.0) 
session.location.refscale = 150.0 - session_proto = grpcutils.convert_session(session) - return core_pb2.CreateSessionResponse(session=session_proto) + return core_pb2.CreateSessionResponse( + session_id=session.id, state=session.state.value + ) def DeleteSession( self, request: core_pb2.DeleteSessionRequest, context: ServicerContext @@ -396,7 +348,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: a delete-session response """ - logger.debug("delete session: %s", request) + logging.debug("delete session: %s", request) result = self.coreemu.delete_session(request.session_id) return core_pb2.DeleteSessionResponse(result=result) @@ -404,27 +356,175 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): self, request: core_pb2.GetSessionsRequest, context: ServicerContext ) -> core_pb2.GetSessionsResponse: """ - Get all currently known session overviews. + Retrieve all currently known sessions. - :param request: get sessions request + :param request: get-sessions request :param context: context object - :return: a get sessions response + :return: a get-sessions response """ - logger.debug("get sessions: %s", request) + logging.debug("get sessions: %s", request) sessions = [] for session_id in self.coreemu.sessions: session = self.coreemu.sessions[session_id] - session_file = str(session.file_path) if session.file_path else None session_summary = core_pb2.SessionSummary( id=session_id, state=session.state.value, nodes=session.get_node_count(), - file=session_file, - dir=str(session.directory), + file=session.file_name, + dir=session.session_dir, ) sessions.append(session_summary) return core_pb2.GetSessionsResponse(sessions=sessions) + def GetSessionLocation( + self, request: core_pb2.GetSessionLocationRequest, context: ServicerContext + ) -> core_pb2.GetSessionLocationResponse: + """ + Retrieve a requested session location + + :param request: get-session-location request + :param context: context object + :return: a get-session-location response + """ + logging.debug("get session location: %s", request) + session = self.get_session(request.session_id, context) + x, y, z = session.location.refxyz + lat, lon, alt = session.location.refgeo + scale = session.location.refscale + location = core_pb2.SessionLocation( + x=x, y=y, z=z, lat=lat, lon=lon, alt=alt, scale=scale + ) + return core_pb2.GetSessionLocationResponse(location=location) + + def SetSessionLocation( + self, request: core_pb2.SetSessionLocationRequest, context: ServicerContext + ) -> core_pb2.SetSessionLocationResponse: + """ + Set session location + + :param request: set-session-location request + :param context: context object + :return: set-session-location response + """ + logging.debug("set session location: %s", request) + session = self.get_session(request.session_id, context) + grpcutils.session_location(session, request.location) + return core_pb2.SetSessionLocationResponse(result=True) + + def SetSessionState( + self, request: core_pb2.SetSessionStateRequest, context: ServicerContext + ) -> core_pb2.SetSessionStateResponse: + """ + Set session state + + :param request: set-session-state request + :param context: context object + :return: set-session-state response + """ + logging.debug("set session state: %s", request) + session = self.get_session(request.session_id, context) + + try: + state = EventTypes(request.state) + session.set_state(state) + + if state == EventTypes.INSTANTIATION_STATE: + if not os.path.exists(session.session_dir): + os.mkdir(session.session_dir) + session.instantiate() +
elif state == EventTypes.SHUTDOWN_STATE: + session.shutdown() + elif state == EventTypes.DATACOLLECT_STATE: + session.data_collect() + elif state == EventTypes.DEFINITION_STATE: + session.clear() + + result = True + except KeyError: + result = False + + return core_pb2.SetSessionStateResponse(result=result) + + def SetSessionUser( + self, request: core_pb2.SetSessionUserRequest, context: ServicerContext + ) -> core_pb2.SetSessionUserResponse: + """ + Sets the user for a session. + + :param request: set session user request + :param context: context object + :return: set session user response + """ + logging.debug("set session user: %s", request) + session = self.get_session(request.session_id, context) + session.user = request.user + return core_pb2.SetSessionUserResponse(result=True) + + def GetSessionOptions( + self, request: core_pb2.GetSessionOptionsRequest, context: ServicerContext + ) -> core_pb2.GetSessionOptionsResponse: + """ + Retrieve session options. + + :param request: + get-session-options request + :param context: context object + :return: get-session-options response about all session's options + """ + logging.debug("get session options: %s", request) + session = self.get_session(request.session_id, context) + current_config = session.options.get_configs() + default_config = session.options.default_values() + default_config.update(current_config) + config = get_config_options(default_config, session.options) + return core_pb2.GetSessionOptionsResponse(config=config) + + def SetSessionOptions( + self, request: core_pb2.SetSessionOptionsRequest, context: ServicerContext + ) -> core_pb2.SetSessionOptionsResponse: + """ + Update a session's configuration + + :param request: set-session-options request + :param context: context object + :return: set-session-options response + """ + logging.debug("set session options: %s", request) + session = self.get_session(request.session_id, context) + config = session.options.get_configs() + config.update(request.config) + return core_pb2.SetSessionOptionsResponse(result=True) + + def GetSessionMetadata( + self, request: core_pb2.GetSessionMetadataRequest, context: ServicerContext + ) -> core_pb2.GetSessionMetadataResponse: + """ + Retrieve session metadata. + + :param request: get session metadata + request + :param context: context object + :return: get session metadata response + """ + logging.debug("get session metadata: %s", request) + session = self.get_session(request.session_id, context) + return core_pb2.GetSessionMetadataResponse(config=session.metadata) + + def SetSessionMetadata( + self, request: core_pb2.SetSessionMetadataRequest, context: ServicerContext + ) -> core_pb2.SetSessionMetadataResponse: + """ + Update a session's metadata. 
+ + :param request: set metadata request + :param context: context object + :return: set metadata response + """ + logging.debug("set session metadata: %s", request) + session = self.get_session(request.session_id, context) + session.metadata = dict(request.config) + return core_pb2.SetSessionMetadataResponse(result=True) + def CheckSession( self, request: core_pb2.GetSessionRequest, context: ServicerContext ) -> core_pb2.CheckSessionResponse: @@ -448,11 +548,68 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-session response """ - logger.debug("get session: %s", request) + logging.debug("get session: %s", request) session = self.get_session(request.session_id, context) - session_proto = grpcutils.convert_session(session) + links = [] + nodes = [] + for _id in session.nodes: + node = session.nodes[_id] + if not isinstance(node, (PtpNet, CtrlNet)): + node_proto = grpcutils.get_node_proto(session, node) + nodes.append(node_proto) + node_links = get_links(node) + links.extend(node_links) + default_services = grpcutils.get_default_services(session) + x, y, z = session.location.refxyz + lat, lon, alt = session.location.refgeo + location = core_pb2.SessionLocation( + x=x, y=y, z=z, lat=lat, lon=lon, alt=alt, scale=session.location.refscale + ) + hooks = grpcutils.get_hooks(session) + emane_models = grpcutils.get_emane_models(session) + emane_config = grpcutils.get_emane_config(session) + emane_model_configs = grpcutils.get_emane_model_configs(session) + wlan_configs = grpcutils.get_wlan_configs(session) + mobility_configs = grpcutils.get_mobility_configs(session) + service_configs = grpcutils.get_node_service_configs(session) + config_service_configs = grpcutils.get_node_config_service_configs(session) + session_proto = core_pb2.Session( + id=session.id, + state=session.state.value, + nodes=nodes, + links=links, + dir=session.session_dir, + user=session.user, + default_services=default_services, + location=location, + hooks=hooks, + emane_models=emane_models, + emane_config=emane_config, + emane_model_configs=emane_model_configs, + wlan_configs=wlan_configs, + service_configs=service_configs, + config_service_configs=config_service_configs, + mobility_configs=mobility_configs, + metadata=session.metadata, + file=session.file_name, + ) return core_pb2.GetSessionResponse(session=session_proto) + def AddSessionServer( + self, request: core_pb2.AddSessionServerRequest, context: ServicerContext + ) -> core_pb2.AddSessionServerResponse: + """ + Add distributed server to a session. 
+ + :param request: get-session + request + :param context: context object + :return: add session server response + """ + session = self.get_session(request.session_id, context) + session.distributed.add_server(request.name, request.host) + return core_pb2.AddSessionServerResponse(result=True) + def SessionAlert( self, request: core_pb2.SessionAlertRequest, context: ServicerContext ) -> core_pb2.SessionAlertResponse: @@ -495,6 +652,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): while self._is_running(context): now = time.monotonic() stats = get_net_stats() + # calculate average if last_check is not None: interval = now - last_check @@ -511,7 +669,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): (current_rxtx["tx"] - previous_rxtx["tx"]) * 8.0 / interval ) throughput = rx_kbps + tx_kbps - if key.startswith("beth"): + if key.startswith("veth"): key = key.split(".") node_id = _INTERFACE_REGEX.search(key[0]).group("node") node_id = int(node_id, base=16) @@ -537,6 +695,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): bridge_throughput.throughput = throughput except ValueError: pass + yield throughputs_event last_check = now @@ -562,20 +721,11 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: add-node response """ - logger.debug("add node: %s", request) + logging.debug("add node: %s", request) session = self.get_session(request.session_id, context) - _type = NodeTypes(request.node.type) + _type, _id, options = grpcutils.add_node_data(request.node) _class = session.get_node_class(_type) - position, options = grpcutils.add_node_data(_class, request.node) - node = session.add_node( - _class, - request.node.id or None, - request.node.name or None, - request.node.server or None, - position, - options, - ) - grpcutils.configure_node(session, request.node, node, context) + node = session.add_node(_class, _id, options) source = request.source if request.source else None session.broadcast_node(node, MessageFlags.ADD, source) return core_pb2.AddNodeResponse(node_id=node.id) @@ -590,36 +740,16 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-node response """ - logger.debug("get node: %s", request) + logging.debug("get node: %s", request) session = self.get_session(request.session_id, context) node = self.get_node(session, request.node_id, context, NodeBase) ifaces = [] for iface_id in node.ifaces: iface = node.ifaces[iface_id] - iface_proto = grpcutils.iface_to_proto(session, iface) + iface_proto = grpcutils.iface_to_proto(request.node_id, iface) ifaces.append(iface_proto) - emane_configs = grpcutils.get_emane_model_configs_dict(session) - node_emane_configs = emane_configs.get(node.id, []) - node_proto = grpcutils.get_node_proto(session, node, node_emane_configs) - links = get_links(session, node) - return core_pb2.GetNodeResponse(node=node_proto, ifaces=ifaces, links=links) - - def MoveNode( - self, request: core_pb2.MoveNodeRequest, context: ServicerContext - ) -> core_pb2.MoveNodeResponse: - """ - Move node, either by x,y position or geospatial. 
- - :param request: move node request - :param context: context object - :return: move nodes response - """ - geo = request.geo if request.HasField("geo") else None - position = request.position if request.HasField("position") else None - self.move_node( - context, request.session_id, request.node_id, geo, position, request.source - ) - return core_pb2.MoveNodeResponse(result=True) + node_proto = grpcutils.get_node_proto(session, node) + return core_pb2.GetNodeResponse(node=node_proto, ifaces=ifaces) def MoveNodes( self, @@ -634,16 +764,27 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :return: move nodes response """ for request in request_iterator: - geo = request.geo if request.HasField("geo") else None - position = request.position if request.HasField("position") else None - self.move_node( - context, - request.session_id, - request.node_id, - geo, - position, - request.source, - ) + if not request.WhichOneof("move_type"): + raise CoreError("move nodes must provide a move type") + session = self.get_session(request.session_id, context) + node = self.get_node(session, request.node_id, context, NodeBase) + options = NodeOptions() + has_geo = request.HasField("geo") + if has_geo: + logging.info("has geo") + lat = request.geo.lat + lon = request.geo.lon + alt = request.geo.alt + options.set_location(lat, lon, alt) + else: + x = request.position.x + y = request.position.y + logging.info("has pos: %s,%s", x, y) + options.set_position(x, y) + session.edit_node(node.id, options) + source = request.source if request.source else None + if not has_geo: + session.broadcast_node(node, source=source) return core_pb2.MoveNodesResponse() def EditNode( @@ -656,13 +797,31 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: edit-node response """ - logger.debug("edit node: %s", request) + logging.debug("edit node: %s", request) session = self.get_session(request.session_id, context) node = self.get_node(session, request.node_id, context, NodeBase) - node.icon = request.icon or None - source = request.source or None - session.broadcast_node(node, source=source) - return core_pb2.EditNodeResponse(result=True) + options = NodeOptions(icon=request.icon) + if request.HasField("position"): + x = request.position.x + y = request.position.y + options.set_position(x, y) + has_geo = request.HasField("geo") + if has_geo: + lat = request.geo.lat + lon = request.geo.lon + alt = request.geo.alt + options.set_location(lat, lon, alt) + result = True + try: + session.edit_node(node.id, options) + source = None + if request.source: + source = request.source + if not has_geo: + session.broadcast_node(node, source=source) + except CoreError: + result = False + return core_pb2.EditNodeResponse(result=result) def DeleteNode( self, request: core_pb2.DeleteNodeRequest, context: ServicerContext @@ -674,7 +833,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: core.api.grpc.core_pb2.DeleteNodeResponse """ - logger.debug("delete node: %s", request) + logging.debug("delete node: %s", request) session = self.get_session(request.session_id, context) result = False if request.node_id in session.nodes: @@ -694,7 +853,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: core.api.grpc.core_pb2.NodeCommandResponse """ - logger.debug("sending node command: %s", request) + logging.debug("sending node command: %s", request) session = self.get_session(request.session_id, context) node = 
self.get_node(session, request.node_id, context, CoreNode) try: @@ -715,12 +874,28 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-node-terminal response """ - logger.debug("getting node terminal: %s", request) + logging.debug("getting node terminal: %s", request) session = self.get_session(request.session_id, context) node = self.get_node(session, request.node_id, context, CoreNode) terminal = node.termcmdstring("/bin/bash") return core_pb2.GetNodeTerminalResponse(terminal=terminal) + def GetNodeLinks( + self, request: core_pb2.GetNodeLinksRequest, context: ServicerContext + ) -> core_pb2.GetNodeLinksResponse: + """ + Retrieve all links from a requested node + + :param request: get-node-links request + :param context: context object + :return: get-node-links response + """ + logging.debug("get node links: %s", request) + session = self.get_session(request.session_id, context) + node = self.get_node(session, request.node_id, context, NodeBase) + links = get_links(node) + return core_pb2.GetNodeLinksResponse(links=links) + def AddLink( self, request: core_pb2.AddLinkRequest, context: ServicerContext ) -> core_pb2.AddLinkResponse: @@ -731,28 +906,24 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: add-link response """ - logger.debug("add link: %s", request) + logging.debug("add link: %s", request) session = self.get_session(request.session_id, context) node1_id = request.link.node1_id node2_id = request.link.node2_id self.get_node(session, node1_id, context, NodeBase) self.get_node(session, node2_id, context, NodeBase) - iface1_data, iface2_data, options = grpcutils.add_link_data(request.link) + iface1_data, iface2_data, options, link_type = grpcutils.add_link_data( + request.link + ) node1_iface, node2_iface = session.add_link( - node1_id, node2_id, iface1_data, iface2_data, options + node1_id, node2_id, iface1_data, iface2_data, options, link_type ) iface1_data = None if node1_iface: - if isinstance(node1_iface.node, CoreNetwork): - iface1_data = InterfaceData(id=node1_iface.id) - else: - iface1_data = node1_iface.get_data() + iface1_data = grpcutils.iface_to_data(node1_iface) iface2_data = None if node2_iface: - if isinstance(node2_iface.node, CoreNetwork): - iface2_data = InterfaceData(id=node2_iface.id) - else: - iface2_data = node2_iface.get_data() + iface2_data = grpcutils.iface_to_data(node2_iface) source = request.source if request.source else None link_data = LinkData( message_type=MessageFlags.ADD, @@ -767,9 +938,9 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): iface1_proto = None iface2_proto = None if node1_iface: - iface1_proto = grpcutils.iface_to_proto(session, node1_iface) + iface1_proto = grpcutils.iface_to_proto(node1_id, node1_iface) if node2_iface: - iface2_proto = grpcutils.iface_to_proto(session, node2_iface) + iface2_proto = grpcutils.iface_to_proto(node2_id, node2_iface) return core_pb2.AddLinkResponse( result=True, iface1=iface1_proto, iface2=iface2_proto ) @@ -784,7 +955,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: edit-link response """ - logger.debug("edit link: %s", request) + logging.debug("edit link: %s", request) session = self.get_session(request.session_id, context) node1_id = request.node1_id node2_id = request.node2_id @@ -830,7 +1001,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: delete-link response """ - logger.debug("delete link: %s",
request) + logging.debug("delete link: %s", request) session = self.get_session(request.session_id, context) node1_id = request.node1_id node2_id = request.node2_id @@ -851,6 +1022,54 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): session.broadcast_link(link_data) return core_pb2.DeleteLinkResponse(result=True) + def GetHooks( + self, request: core_pb2.GetHooksRequest, context: ServicerContext + ) -> core_pb2.GetHooksResponse: + """ + Retrieve all hooks from a session + + :param request: get-hook request + :param context: context object + :return: get-hooks response about all the hooks in all session states + """ + logging.debug("get hooks: %s", request) + session = self.get_session(request.session_id, context) + hooks = grpcutils.get_hooks(session) + return core_pb2.GetHooksResponse(hooks=hooks) + + def AddHook( + self, request: core_pb2.AddHookRequest, context: ServicerContext + ) -> core_pb2.AddHookResponse: + """ + Add hook to a session + + :param request: add-hook request + :param context: context object + :return: add-hook response + """ + logging.debug("add hook: %s", request) + session = self.get_session(request.session_id, context) + hook = request.hook + state = EventTypes(hook.state) + session.add_hook(state, hook.file, hook.data) + return core_pb2.AddHookResponse(result=True) + + def GetMobilityConfigs( + self, request: GetMobilityConfigsRequest, context: ServicerContext + ) -> GetMobilityConfigsResponse: + """ + Retrieve all mobility configurations from a session + + :param request: + get-mobility-configurations request + :param context: context object + :return: get-mobility-configurations response that has a list of configurations + """ + logging.debug("get mobility configs: %s", request) + session = self.get_session(request.session_id, context) + configs = grpcutils.get_mobility_configs(session) + return GetMobilityConfigsResponse(configs=configs) + def GetMobilityConfig( self, request: GetMobilityConfigRequest, context: ServicerContext ) -> GetMobilityConfigResponse: @@ -862,7 +1081,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-mobility-configuration response """ - logger.debug("get mobility config: %s", request) + logging.debug("get mobility config: %s", request) session = self.get_session(request.session_id, context) current_config = session.mobility.get_model_config( request.node_id, Ns2ScriptedMobility.name @@ -881,7 +1100,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: set-mobility-configuration response """ - logger.debug("set mobility config: %s", request) + logging.debug("set mobility config: %s", request) session = self.get_session(request.session_id, context) mobility_config = request.mobility_config session.mobility.set_model_config( @@ -900,7 +1119,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: mobility-action response """ - logger.debug("mobility action: %s", request) + logging.debug("mobility action: %s", request) session = self.get_session(request.session_id, context) node = grpcutils.get_mobility_node(session, request.node_id, context) if not node.mobility: @@ -918,6 +1137,24 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): result = False return MobilityActionResponse(result=result) + def GetServices( + self, request: GetServicesRequest, context: ServicerContext + ) -> GetServicesResponse: + """ + Retrieve all the services that are running + + :param request: get-service request + :param 
context: context object + :return: get-services response + """ + logging.debug("get services: %s", request) + services = [] + for name in ServiceManager.services: + service = ServiceManager.services[name] + service_proto = Service(group=service.group, name=service.name) + services.append(service_proto) + return GetServicesResponse(services=services) + def GetServiceDefaults( self, request: GetServiceDefaultsRequest, context: ServicerContext ) -> GetServiceDefaultsResponse: @@ -928,7 +1165,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-service-defaults response about all the available default services """ - logger.debug("get service defaults: %s", request) + logging.debug("get service defaults: %s", request) session = self.get_session(request.session_id, context) defaults = grpcutils.get_default_services(session) return GetServiceDefaultsResponse(defaults=defaults) @@ -943,15 +1180,31 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: set-service-defaults response """ - logger.debug("set service defaults: %s", request) + logging.debug("set service defaults: %s", request) session = self.get_session(request.session_id, context) session.services.default_services.clear() for service_defaults in request.defaults: session.services.default_services[ - service_defaults.model + service_defaults.node_type ] = service_defaults.services return SetServiceDefaultsResponse(result=True) + def GetNodeServiceConfigs( + self, request: GetNodeServiceConfigsRequest, context: ServicerContext + ) -> GetNodeServiceConfigsResponse: + """ + Retrieve all node service configurations. + + :param request: + get-node-service request + :param context: context object + :return: all node service configs response + """ + logging.debug("get node service configs: %s", request) + session = self.get_session(request.session_id, context) + configs = grpcutils.get_node_service_configs(session) + return GetNodeServiceConfigsResponse(configs=configs) + def GetNodeService( self, request: GetNodeServiceRequest, context: ServicerContext ) -> GetNodeServiceResponse: @@ -963,7 +1216,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-node-service response about the requested service """ - logger.debug("get node service: %s", request) + logging.debug("get node service: %s", request) session = self.get_session(request.session_id, context) service = session.services.get_service( request.node_id, request.service, default_service=True @@ -982,7 +1235,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-node-service response about the requested service """ - logger.debug("get node service file: %s", request) + logging.debug("get node service file: %s", request) session = self.get_session(request.session_id, context) node = self.get_node(session, request.node_id, context, CoreNode) file_data = session.services.get_service_file( @@ -990,6 +1243,42 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): ) return GetNodeServiceFileResponse(data=file_data.data) + def SetNodeService( + self, request: SetNodeServiceRequest, context: ServicerContext + ) -> SetNodeServiceResponse: + """ + Set a node service for a node + + :param request: set-node-service + request that has info to set a node service + :param context: context object + :return: set-node-service response + """ + logging.debug("set node service: %s", request) + session = 
self.get_session(request.session_id, context) + config = request.config + grpcutils.service_configuration(session, config) + return SetNodeServiceResponse(result=True) + + def SetNodeServiceFile( + self, request: SetNodeServiceFileRequest, context: ServicerContext + ) -> SetNodeServiceFileResponse: + """ + Store the customized service file in the service config + + :param request: + set-node-service-file request + :param context: context object + :return: set-node-service-file response + """ + logging.debug("set node service file: %s", request) + session = self.get_session(request.session_id, context) + config = request.config + session.services.set_service_file( + config.node_id, config.service, config.file, config.data + ) + return SetNodeServiceFileResponse(result=True) + def ServiceAction( self, request: ServiceActionRequest, context: ServicerContext ) -> ServiceActionResponse: @@ -1001,7 +1290,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: service-action response about status of action """ - logger.debug("service action: %s", request) + logging.debug("service action: %s", request) session = self.get_session(request.session_id, context) node = self.get_node(session, request.node_id, context, CoreNode) service = None @@ -1031,47 +1320,20 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): return ServiceActionResponse(result=result) - def ConfigServiceAction( - self, request: ServiceActionRequest, context: ServicerContext - ) -> ServiceActionResponse: + def GetWlanConfigs( + self, request: GetWlanConfigsRequest, context: ServicerContext + ) -> GetWlanConfigsResponse: """ - Take action whether to start, stop, restart, validate the config service or - none of the above. + Retrieve all wireless-lan configurations. 
- :param request: service action request - :param context: context object - :return: service action response about status of action + :param request: request + :param context: core.api.grpc.core_pb2.GetWlanConfigResponse + :return: all wlan configurations """ - logger.debug("service action: %s", request) + logging.debug("get wlan configs: %s", request) session = self.get_session(request.session_id, context) - node = self.get_node(session, request.node_id, context, CoreNode) - service = node.config_services.get(request.service) - if not service: - context.abort(grpc.StatusCode.NOT_FOUND, "config service not found") - result = False - if request.action == ServiceAction.START: - try: - service.start() - result = True - except ConfigServiceBootError: - pass - elif request.action == ServiceAction.STOP: - service.stop() - result = True - elif request.action == ServiceAction.RESTART: - service.stop() - try: - service.start() - result = True - except ConfigServiceBootError: - pass - elif request.action == ServiceAction.VALIDATE: - try: - service.run_validation() - result = True - except ConfigServiceBootError: - pass - return ServiceActionResponse(result=result) + configs = grpcutils.get_wlan_configs(session) + return GetWlanConfigsResponse(configs=configs) def GetWlanConfig( self, request: GetWlanConfigRequest, context: ServicerContext @@ -1083,7 +1345,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: core.api.grpc.core_pb2.GetWlanConfigResponse :return: get-wlan-configuration response about the wlan configuration of a node """ - logger.debug("get wlan config: %s", request) + logging.debug("get wlan config: %s", request) session = self.get_session(request.session_id, context) current_config = session.mobility.get_model_config( request.node_id, BasicRangeModel.name @@ -1101,16 +1363,62 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: set-wlan-configuration response """ - logger.debug("set wlan config: %s", request) + logging.debug("set wlan config: %s", request) session = self.get_session(request.session_id, context) node_id = request.wlan_config.node_id config = request.wlan_config.config session.mobility.set_model_config(node_id, BasicRangeModel.name, config) - if session.is_running(): + if session.state == EventTypes.RUNTIME_STATE: node = self.get_node(session, node_id, context, WlanNode) node.updatemodel(config) return SetWlanConfigResponse(result=True) + def GetEmaneConfig( + self, request: GetEmaneConfigRequest, context: ServicerContext + ) -> GetEmaneConfigResponse: + """ + Retrieve EMANE configuration of a session + + :param request: get-EMANE-configuration request + :param context: context object + :return: get-EMANE-configuration response + """ + logging.debug("get emane config: %s", request) + session = self.get_session(request.session_id, context) + config = grpcutils.get_emane_config(session) + return GetEmaneConfigResponse(config=config) + + def SetEmaneConfig( + self, request: SetEmaneConfigRequest, context: ServicerContext + ) -> SetEmaneConfigResponse: + """ + Set EMANE configuration of a session + + :param request: set-EMANE-configuration request + :param context: context object + :return: set-EMANE-configuration response + """ + logging.debug("set emane config: %s", request) + session = self.get_session(request.session_id, context) + config = session.emane.get_configs() + config.update(request.config) + return SetEmaneConfigResponse(result=True) + + def GetEmaneModels( + self, request: GetEmaneModelsRequest, 
context: ServicerContext + ) -> GetEmaneModelsResponse: + """ + Retrieve all the EMANE models in the session + + :param request: get-emane-model request + :param context: context object + :return: get-EMANE-models response that has all the models + """ + logging.debug("get emane models: %s", request) + session = self.get_session(request.session_id, context) + models = grpcutils.get_emane_models(session) + return GetEmaneModelsResponse(models=models) + def GetEmaneModelConfig( self, request: GetEmaneModelConfigRequest, context: ServicerContext ) -> GetEmaneModelConfigResponse: @@ -1122,11 +1430,13 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: get-EMANE-model-configuration response """ - logger.debug("get emane model config: %s", request) + logging.debug("get emane model config: %s", request) session = self.get_session(request.session_id, context) - model = session.emane.get_model(request.model) + model = session.emane.models.get(request.model) + if not model: + raise CoreError(f"invalid emane model: {request.model}") _id = utils.iface_config_id(request.node_id, request.iface_id) - current_config = session.emane.get_config(_id, request.model) + current_config = session.emane.get_model_config(_id, request.model) config = get_config_options(current_config, model) return GetEmaneModelConfigResponse(config=config) @@ -1141,13 +1451,30 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: set-EMANE-model-configuration response """ - logger.debug("set emane model config: %s", request) + logging.debug("set emane model config: %s", request) session = self.get_session(request.session_id, context) model_config = request.emane_model_config _id = utils.iface_config_id(model_config.node_id, model_config.iface_id) - session.emane.set_config(_id, model_config.model, model_config.config) + session.emane.set_model_config(_id, model_config.model, model_config.config) return SetEmaneModelConfigResponse(result=True) + def GetEmaneModelConfigs( + self, request: GetEmaneModelConfigsRequest, context: ServicerContext + ) -> GetEmaneModelConfigsResponse: + """ + Retrieve all EMANE model configurations of a session + + :param request: + get-EMANE-model-configurations request + :param context: context object + :return: get-EMANE-model-configurations response that has all the EMANE + configurations + """ + logging.debug("get emane model configs: %s", request) + session = self.get_session(request.session_id, context) + configs = grpcutils.get_emane_model_configs(session) + return GetEmaneModelConfigsResponse(configs=configs) + def SaveXml( self, request: core_pb2.SaveXmlRequest, context: ServicerContext ) -> core_pb2.SaveXmlResponse: @@ -1158,12 +1485,15 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: save-xml response """ - logger.debug("save xml: %s", request) + logging.debug("save xml: %s", request) session = self.get_session(request.session_id, context) + _, temp_path = tempfile.mkstemp() session.save_xml(temp_path) + with open(temp_path, "r") as xml_file: data = xml_file.read() + return core_pb2.SaveXmlResponse(data=data) def OpenXml( @@ -1176,20 +1506,20 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: Open-XML response or raise an exception if invalid XML file """ - logger.debug("open xml: %s", request) + logging.debug("open xml: %s", request) session = self.coreemu.create_session() + temp = 
tempfile.NamedTemporaryFile(delete=False) - temp.write(request.data.encode()) + temp.write(request.data.encode("utf-8")) temp.close() - temp_path = Path(temp.name) - file_path = Path(request.file) + try: - session.open_xml(temp_path, request.start) - session.name = file_path.name - session.file_path = file_path + session.open_xml(temp.name, request.start) + session.name = os.path.basename(request.file) + session.file_name = request.file return core_pb2.OpenXmlResponse(session_id=session.id, result=True) except IOError: - logger.exception("error opening session file") + logging.exception("error opening session file") self.coreemu.delete_session(session.id) context.abort(grpc.StatusCode.INVALID_ARGUMENT, "invalid xml file") finally: @@ -1199,8 +1529,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): self, request: core_pb2.GetInterfacesRequest, context: ServicerContext ) -> core_pb2.GetInterfacesResponse: """ - Retrieve all the interfaces of the system including bridges, virtual ethernet, - and loopback. + Retrieve all the interfaces of the system including bridges, virtual ethernet, and loopback :param request: get-interfaces request :param context: context object @@ -1223,16 +1552,68 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: context object :return: emane link response with success status """ - logger.debug("emane link: %s", request) + logging.debug("emane link: %s", request) session = self.get_session(request.session_id, context) - flag = MessageFlags.ADD if request.linked else MessageFlags.DELETE - link = session.emane.get_nem_link(request.nem1, request.nem2, flag) - if link: + nem1 = request.nem1 + iface1 = session.emane.get_iface(nem1) + if not iface1: + context.abort(grpc.StatusCode.NOT_FOUND, f"nem one {nem1} not found") + node1 = iface1.node + + nem2 = request.nem2 + iface2 = session.emane.get_iface(nem2) + if not iface2: + context.abort(grpc.StatusCode.NOT_FOUND, f"nem two {nem2} not found") + node2 = iface2.node + + if iface1.net == iface2.net: + if request.linked: + flag = MessageFlags.ADD + else: + flag = MessageFlags.DELETE + color = session.get_link_color(iface1.net.id) + link = LinkData( + message_type=flag, + type=LinkTypes.WIRELESS, + node1_id=node1.id, + node2_id=node2.id, + network_id=iface1.net.id, + color=color, + ) session.broadcast_link(link) return EmaneLinkResponse(result=True) else: return EmaneLinkResponse(result=False) + def GetConfigServices( + self, request: GetConfigServicesRequest, context: ServicerContext + ) -> GetConfigServicesResponse: + """ + Gets all currently known configuration services. 
+ + :param request: get config services request + :param context: grpc context + :return: get config services response + """ + services = [] + for service in self.coreemu.service_manager.services.values(): + service_proto = ConfigService( + name=service.name, + group=service.group, + executables=service.executables, + dependencies=service.dependencies, + directories=service.directories, + files=service.files, + startup=service.startup, + validate=service.validate, + shutdown=service.shutdown, + validation_mode=service.validation_mode.value, + validation_timer=service.validation_timer, + validation_period=service.validation_period, + ) + services.append(service_proto) + return GetConfigServicesResponse(services=services) + def GetNodeConfigService( self, request: GetNodeConfigServiceRequest, context: ServicerContext ) -> GetNodeConfigServiceResponse: @@ -1254,27 +1635,6 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): config = {x.id: x.default for x in service.default_configs} return GetNodeConfigServiceResponse(config=config) - def GetConfigServiceRendered( - self, request: GetConfigServiceRenderedRequest, context: ServicerContext - ) -> GetConfigServiceRenderedResponse: - """ - Retrieves the rendered file data for a given config service on a node. - - :param request: config service render request - :param context: grpc context - :return: rendered config service files - """ - session = self.get_session(request.session_id, context) - node = self.get_node(session, request.node_id, context, CoreNode) - self.validate_service(request.name, context) - service = node.config_services.get(request.name) - if not service: - context.abort( - grpc.StatusCode.NOT_FOUND, f"unknown node service {request.name}" - ) - rendered = service.get_rendered_templates() - return GetConfigServiceRenderedResponse(rendered=rendered) - def GetConfigServiceDefaults( self, request: GetConfigServiceDefaultsRequest, context: ServicerContext ) -> GetConfigServiceDefaultsResponse: @@ -1285,10 +1645,8 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): :param context: grpc context :return: get config service defaults response """ - session = self.get_session(request.session_id, context) - node = self.get_node(session, request.node_id, context, CoreNode) service_class = self.validate_service(request.name, context) - service = service_class(node) + service = service_class(None) templates = service.get_templates() config = {} for configuration in service.default_configs: @@ -1309,21 +1667,82 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): templates=templates, config=config, modes=modes ) + def GetNodeConfigServiceConfigs( + self, request: GetNodeConfigServiceConfigsRequest, context: ServicerContext + ) -> GetNodeConfigServiceConfigsResponse: + """ + Get current custom templates and config for configuration services for a given + node. + + :param request: get node config service configs request + :param context: grpc context + :return: get node config service configs response + """ + session = self.get_session(request.session_id, context) + configs = grpcutils.get_node_config_service_configs(session) + return GetNodeConfigServiceConfigsResponse(configs=configs) + + def GetNodeConfigServices( + self, request: GetNodeConfigServicesRequest, context: ServicerContext + ) -> GetNodeConfigServicesResponse: + """ + Get configuration services for a given node. 
+ + :param request: get node config services request + :param context: grpc context + :return: get node config services response + """ + session = self.get_session(request.session_id, context) + node = self.get_node(session, request.node_id, context, CoreNode) + services = node.config_services.keys() + return GetNodeConfigServicesResponse(services=services) + + def SetNodeConfigService( + self, request: SetNodeConfigServiceRequest, context: ServicerContext + ) -> SetNodeConfigServiceResponse: + """ + Set custom config, for a given configuration service, for a given node. + + :param request: set node config service request + :param context: grpc context + :return: set node config service response + """ + session = self.get_session(request.session_id, context) + node = self.get_node(session, request.node_id, context, CoreNode) + self.validate_service(request.name, context) + service = node.config_services.get(request.name) + if service: + service.set_config(request.config) + return SetNodeConfigServiceResponse(result=True) + else: + context.abort( + grpc.StatusCode.NOT_FOUND, + f"node {node.name} missing service {request.name}", + ) + def GetEmaneEventChannel( self, request: GetEmaneEventChannelRequest, context: ServicerContext ) -> GetEmaneEventChannelResponse: session = self.get_session(request.session_id, context) - service = session.emane.nem_service.get(request.nem_id) - if not service: - context.abort(grpc.StatusCode.NOT_FOUND, f"unknown nem id {request.nem_id}") - return GetEmaneEventChannelResponse( - group=service.group, port=service.port, device=service.device - ) + group = None + port = None + device = None + if session.emane.eventchannel: + group, port, device = session.emane.eventchannel + return GetEmaneEventChannelResponse(group=group, port=port, device=device) def ExecuteScript(self, request, context): existing_sessions = set(self.coreemu.sessions.keys()) - file_path = Path(request.script) - utils.execute_script(self.coreemu, file_path, request.args) + thread = threading.Thread( + target=utils.execute_file, + args=( + request.script, + {"__file__": request.script, "coreemu": self.coreemu}, + ), + daemon=True, + ) + thread.start() + thread.join() current_sessions = set(self.coreemu.sessions.keys()) new_sessions = list(current_sessions.difference(existing_sessions)) new_session = -1 @@ -1336,21 +1755,18 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): ) -> WlanLinkResponse: session = self.get_session(request.session_id, context) wlan = self.get_node(session, request.wlan, context, WlanNode) - if not isinstance(wlan.wireless_model, BasicRangeModel): + if not isinstance(wlan.model, BasicRangeModel): context.abort( grpc.StatusCode.NOT_FOUND, - f"wlan node {request.wlan} is not using BasicRangeModel", + f"wlan node {request.wlan} is not using BasicRangeModel", ) node1 = self.get_node(session, request.node1_id, context, CoreNode) node2 = self.get_node(session, request.node2_id, context, CoreNode) node1_iface, node2_iface = None, None - for iface in node1.get_ifaces(control=False): - if iface.net == wlan: - node1_iface = iface - break - for iface in node2.get_ifaces(control=False): - if iface.net == wlan: - node2_iface = iface + for net, iface1, iface2 in node1.commonnets(node2): + if net == wlan: + node1_iface = iface1 + node2_iface = iface2 break result = False if node1_iface and node2_iface: @@ -1358,9 +1774,7 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): wlan.link(node1_iface, node2_iface) else: wlan.unlink(node1_iface, node2_iface) -
wlan.wireless_model.sendlinkmsg( - node1_iface, node2_iface, unlink=not request.linked - ) + wlan.model.sendlinkmsg(node1_iface, node2_iface, unlink=not request.linked) result = True return WlanLinkResponse(result=result) @@ -1377,60 +1791,3 @@ class CoreGrpcServer(core_pb2_grpc.CoreApiServicer): nem2 = grpcutils.get_nem_id(session, node2, request.iface2_id, context) session.emane.publish_pathloss(nem1, nem2, request.rx1, request.rx2) return EmanePathlossesResponse() - - def Linked( - self, request: LinkedRequest, context: ServicerContext - ) -> LinkedResponse: - session = self.get_session(request.session_id, context) - session.linked( - request.node1_id, - request.node2_id, - request.iface1_id, - request.iface2_id, - request.linked, - ) - return LinkedResponse() - - def WirelessLinked( - self, request: WirelessLinkedRequest, context: ServicerContext - ) -> WirelessLinkedResponse: - session = self.get_session(request.session_id, context) - wireless = self.get_node(session, request.wireless_id, context, WirelessNode) - wireless.link_control(request.node1_id, request.node2_id, request.linked) - return WirelessLinkedResponse() - - def WirelessConfig( - self, request: WirelessConfigRequest, context: ServicerContext - ) -> WirelessConfigResponse: - session = self.get_session(request.session_id, context) - wireless = self.get_node(session, request.wireless_id, context, WirelessNode) - options1 = request.options1 - options2 = options1 - if request.HasField("options2"): - options2 = request.options2 - options1 = grpcutils.convert_options_proto(options1) - options2 = grpcutils.convert_options_proto(options2) - wireless.link_config(request.node1_id, request.node2_id, options1, options2) - return WirelessConfigResponse() - - def GetWirelessConfig( - self, request: GetWirelessConfigRequest, context: ServicerContext - ) -> GetWirelessConfigResponse: - session = self.get_session(request.session_id, context) - try: - wireless = session.get_node(request.node_id, WirelessNode) - configs = wireless.get_config() - except CoreError: - configs = {x.id: x for x in WirelessNode.options} - config_options = {} - for config in configs.values(): - config_option = common_pb2.ConfigOption( - label=config.label, - name=config.id, - value=config.default, - type=config.type.value, - select=config.options, - group=config.group, - ) - config_options[config.id] = config_option - return GetWirelessConfigResponse(config=config_options) diff --git a/daemon/core/api/grpc/wrappers.py b/daemon/core/api/grpc/wrappers.py index f84e6a08..8cc55446 100644 --- a/daemon/core/api/grpc/wrappers.py +++ b/daemon/core/api/grpc/wrappers.py @@ -1,7 +1,7 @@ from dataclasses import dataclass, field from enum import Enum from pathlib import Path -from typing import Any, Optional +from typing import Any, Dict, List, Optional, Set, Tuple from core.api.grpc import ( common_pb2, @@ -67,8 +67,6 @@ class NodeType(Enum): CONTROL_NET = 13 DOCKER = 15 LXC = 16 - WIRELESS = 17 - PODMAN = 18 class LinkType(Enum): @@ -115,13 +113,13 @@ class EventType: class ConfigService: group: str name: str - executables: list[str] - dependencies: list[str] - directories: list[str] - files: list[str] - startup: list[str] - validate: list[str] - shutdown: list[str] + executables: List[str] + dependencies: List[str] + directories: List[str] + files: List[str] + startup: List[str] + validate: List[str] + shutdown: List[str] validation_mode: ConfigServiceValidationMode validation_timer: int validation_period: float @@ -148,8 +146,8 @@ class ConfigService: class 
ConfigServiceConfig: node_id: int name: str - templates: dict[str, str] - config: dict[str, str] + templates: Dict[str, str] + config: Dict[str, str] @classmethod def from_proto( @@ -165,40 +163,26 @@ class ConfigServiceConfig: @dataclass class ConfigServiceData: - templates: dict[str, str] = field(default_factory=dict) - config: dict[str, str] = field(default_factory=dict) + templates: Dict[str, str] = field(default_factory=dict) + config: Dict[str, str] = field(default_factory=dict) @dataclass class ConfigServiceDefaults: - templates: dict[str, str] - config: dict[str, "ConfigOption"] - modes: dict[str, dict[str, str]] + templates: Dict[str, str] + config: Dict[str, "ConfigOption"] + modes: List[str] @classmethod def from_proto( - cls, proto: configservices_pb2.GetConfigServiceDefaultsResponse + cls, proto: configservices_pb2.GetConfigServicesResponse ) -> "ConfigServiceDefaults": config = ConfigOption.from_dict(proto.config) - modes = {x.name: dict(x.config) for x in proto.modes} return ConfigServiceDefaults( - templates=dict(proto.templates), config=config, modes=modes + templates=dict(proto.templates), config=config, modes=list(proto.modes) ) -@dataclass -class Server: - name: str - host: str - - @classmethod - def from_proto(cls, proto: core_pb2.Server) -> "Server": - return Server(name=proto.name, host=proto.host) - - def to_proto(self) -> core_pb2.Server: - return core_pb2.Server(name=self.name, host=self.host) - - @dataclass class Service: group: str @@ -211,26 +195,26 @@ class Service: @dataclass class ServiceDefault: - model: str - services: list[str] + node_type: str + services: List[str] @classmethod def from_proto(cls, proto: services_pb2.ServiceDefaults) -> "ServiceDefault": - return ServiceDefault(model=proto.model, services=list(proto.services)) + return ServiceDefault(node_type=proto.node_type, services=list(proto.services)) @dataclass class NodeServiceData: - executables: list[str] = field(default_factory=list) - dependencies: list[str] = field(default_factory=list) - dirs: list[str] = field(default_factory=list) - configs: list[str] = field(default_factory=list) - startup: list[str] = field(default_factory=list) - validate: list[str] = field(default_factory=list) - validation_mode: ServiceValidationMode = ServiceValidationMode.NON_BLOCKING - validation_timer: int = 5 - shutdown: list[str] = field(default_factory=list) - meta: str = None + executables: List[str] + dependencies: List[str] + dirs: List[str] + configs: List[str] + startup: List[str] + validate: List[str] + validation_mode: ServiceValidationMode + validation_timer: int + shutdown: List[str] + meta: str @classmethod def from_proto(cls, proto: services_pb2.NodeServiceData) -> "NodeServiceData": @@ -241,53 +225,22 @@ class NodeServiceData: configs=proto.configs, startup=proto.startup, validate=proto.validate, - validation_mode=ServiceValidationMode(proto.validation_mode), + validation_mode=proto.validation_mode, validation_timer=proto.validation_timer, shutdown=proto.shutdown, meta=proto.meta, ) - def to_proto(self) -> services_pb2.NodeServiceData: - return services_pb2.NodeServiceData( - executables=self.executables, - dependencies=self.dependencies, - dirs=self.dirs, - configs=self.configs, - startup=self.startup, - validate=self.validate, - validation_mode=self.validation_mode.value, - validation_timer=self.validation_timer, - shutdown=self.shutdown, - meta=self.meta, - ) - - -@dataclass -class NodeServiceConfig: - node_id: int - service: str - data: NodeServiceData - files: dict[str, str] = 
field(default_factory=dict) - - @classmethod - def from_proto(cls, proto: services_pb2.NodeServiceConfig) -> "NodeServiceConfig": - return NodeServiceConfig( - node_id=proto.node_id, - service=proto.service, - data=NodeServiceData.from_proto(proto.data), - files=dict(proto.files), - ) - @dataclass class ServiceConfig: node_id: int service: str - files: list[str] = None - directories: list[str] = None - startup: list[str] = None - validate: list[str] = None - shutdown: list[str] = None + files: List[str] = None + directories: List[str] = None + startup: List[str] = None + validate: List[str] = None + shutdown: List[str] = None def to_proto(self) -> services_pb2.ServiceConfig: return services_pb2.ServiceConfig( @@ -301,19 +254,6 @@ class ServiceConfig: ) -@dataclass -class ServiceFileConfig: - node_id: int - service: str - file: str - data: str = field(repr=False) - - def to_proto(self) -> services_pb2.ServiceFileConfig: - return services_pb2.ServiceFileConfig( - node_id=self.node_id, service=self.service, file=self.file, data=self.data - ) - - @dataclass class BridgeThroughput: node_id: int @@ -340,8 +280,8 @@ class InterfaceThroughput: @dataclass class ThroughputsEvent: session_id: int - bridge_throughputs: list[BridgeThroughput] - iface_throughputs: list[InterfaceThroughput] + bridge_throughputs: List[BridgeThroughput] + iface_throughputs: List[InterfaceThroughput] @classmethod def from_proto(cls, proto: core_pb2.ThroughputsEvent) -> "ThroughputsEvent": @@ -424,49 +364,37 @@ class ExceptionEvent: @dataclass class ConfigOption: + label: str name: str value: str - label: str = None - type: ConfigOptionType = None - group: str = None - select: list[str] = None + type: ConfigOptionType + group: str + select: List[str] = None @classmethod def from_dict( - cls, config: dict[str, common_pb2.ConfigOption] - ) -> dict[str, "ConfigOption"]: + cls, config: Dict[str, common_pb2.ConfigOption] + ) -> Dict[str, "ConfigOption"]: d = {} for key, value in config.items(): d[key] = ConfigOption.from_proto(value) return d @classmethod - def to_dict(cls, config: dict[str, "ConfigOption"]) -> dict[str, str]: + def to_dict(cls, config: Dict[str, "ConfigOption"]) -> Dict[str, str]: return {k: v.value for k, v in config.items()} @classmethod def from_proto(cls, proto: common_pb2.ConfigOption) -> "ConfigOption": - config_type = ConfigOptionType(proto.type) if proto.type is not None else None return ConfigOption( label=proto.label, name=proto.name, value=proto.value, - type=config_type, + type=ConfigOptionType(proto.type), group=proto.group, select=proto.select, ) - def to_proto(self) -> common_pb2.ConfigOption: - config_type = self.type.value if self.type is not None else None - return common_pb2.ConfigOption( - label=self.label, - name=self.name, - value=self.value, - type=config_type, - select=self.select, - group=self.group, - ) - @dataclass class Interface: @@ -482,8 +410,6 @@ class Interface: mtu: int = None node_id: int = None net2_id: int = None - nem_id: int = None - nem_port: int = None @classmethod def from_proto(cls, proto: core_pb2.Interface) -> "Interface": @@ -500,8 +426,6 @@ class Interface: mtu=proto.mtu, node_id=proto.node_id, net2_id=proto.net2_id, - nem_id=proto.nem_id, - nem_port=proto.nem_port, ) def to_proto(self) -> core_pb2.Interface: @@ -643,15 +567,6 @@ class SessionSummary: dir=proto.dir, ) - def to_proto(self) -> core_pb2.SessionSummary: - return core_pb2.SessionSummary( - id=self.id, - state=self.state.value, - nodes=self.nodes, - file=self.file, - dir=self.dir, - ) - @dataclass class 
Hook: @@ -672,7 +587,7 @@ class EmaneModelConfig: node_id: int model: str iface_id: int = -1 - config: dict[str, ConfigOption] = None + config: Dict[str, ConfigOption] = None @classmethod def from_proto(cls, proto: emane_pb2.GetEmaneModelConfig) -> "EmaneModelConfig": @@ -683,12 +598,11 @@ class EmaneModelConfig: ) def to_proto(self) -> emane_pb2.EmaneModelConfig: - config = ConfigOption.to_dict(self.config) return emane_pb2.EmaneModelConfig( node_id=self.node_id, model=self.model, iface_id=self.iface_id, - config=config, + config=self.config, ) @@ -721,13 +635,13 @@ class Geo: @dataclass class Node: - id: int = None - name: str = None - type: NodeType = NodeType.DEFAULT + id: int + name: str + type: NodeType model: str = None - position: Position = Position(x=0, y=0) - services: set[str] = field(default_factory=set) - config_services: set[str] = field(default_factory=set) + position: Position = None + services: Set[str] = field(default_factory=set) + config_services: Set[str] = field(default_factory=set) emane: str = None icon: str = None image: str = None @@ -735,49 +649,30 @@ class Node: geo: Geo = None dir: str = None channel: str = None - canvas: int = None # configurations - emane_model_configs: dict[ - tuple[str, Optional[int]], dict[str, ConfigOption] + emane_model_configs: Dict[ + Tuple[str, Optional[int]], Dict[str, ConfigOption] ] = field(default_factory=dict, repr=False) - wlan_config: dict[str, ConfigOption] = field(default_factory=dict, repr=False) - wireless_config: dict[str, ConfigOption] = field(default_factory=dict, repr=False) - mobility_config: dict[str, ConfigOption] = field(default_factory=dict, repr=False) - service_configs: dict[str, NodeServiceData] = field( + wlan_config: Dict[str, ConfigOption] = field(default_factory=dict, repr=False) + mobility_config: Dict[str, ConfigOption] = field(default_factory=dict, repr=False) + service_configs: Dict[str, NodeServiceData] = field( default_factory=dict, repr=False ) - service_file_configs: dict[str, dict[str, str]] = field( + service_file_configs: Dict[str, Dict[str, str]] = field( default_factory=dict, repr=False ) - config_service_configs: dict[str, ConfigServiceData] = field( + config_service_configs: Dict[str, ConfigServiceData] = field( default_factory=dict, repr=False ) @classmethod def from_proto(cls, proto: core_pb2.Node) -> "Node": - service_configs = {} - service_file_configs = {} - for service, node_config in proto.service_configs.items(): - service_configs[service] = NodeServiceData.from_proto(node_config.data) - service_file_configs[service] = dict(node_config.files) - emane_configs = {} - for emane_config in proto.emane_configs: - iface_id = None if emane_config.iface_id == -1 else emane_config.iface_id - model = emane_config.model - key = (model, iface_id) - emane_configs[key] = ConfigOption.from_dict(emane_config.config) - config_service_configs = {} - for service, service_config in proto.config_service_configs.items(): - config_service_configs[service] = ConfigServiceData( - templates=dict(service_config.templates), - config=dict(service_config.config), - ) return Node( id=proto.id, name=proto.name, type=NodeType(proto.type), - model=proto.model or None, + model=proto.model, position=Position.from_proto(proto.position), services=set(proto.services), config_services=set(proto.config_services), @@ -788,45 +683,9 @@ class Node: geo=Geo.from_proto(proto.geo), dir=proto.dir, channel=proto.channel, - canvas=proto.canvas, - wlan_config=ConfigOption.from_dict(proto.wlan_config), - 
mobility_config=ConfigOption.from_dict(proto.mobility_config), - service_configs=service_configs, - service_file_configs=service_file_configs, - config_service_configs=config_service_configs, - emane_model_configs=emane_configs, - wireless_config=ConfigOption.from_dict(proto.wireless_config), ) def to_proto(self) -> core_pb2.Node: - emane_configs = [] - for key, config in self.emane_model_configs.items(): - model, iface_id = key - if iface_id is None: - iface_id = -1 - config = {k: v.to_proto() for k, v in config.items()} - emane_config = emane_pb2.NodeEmaneConfig( - iface_id=iface_id, model=model, config=config - ) - emane_configs.append(emane_config) - service_configs = {} - for service, service_data in self.service_configs.items(): - service_configs[service] = services_pb2.NodeServiceConfig( - service=service, data=service_data.to_proto() - ) - for service, file_configs in self.service_file_configs.items(): - service_config = service_configs.get(service) - if service_config: - service_config.files.update(file_configs) - else: - service_configs[service] = services_pb2.NodeServiceConfig( - service=service, files=file_configs - ) - config_service_configs = {} - for service, service_config in self.config_service_configs.items(): - config_service_configs[service] = configservices_pb2.ConfigServiceConfig( - templates=service_config.templates, config=service_config.config - ) return core_pb2.Node( id=self.id, name=self.name, @@ -841,62 +700,60 @@ class Node: server=self.server, dir=self.dir, channel=self.channel, - canvas=self.canvas, - wlan_config={k: v.to_proto() for k, v in self.wlan_config.items()}, - mobility_config={k: v.to_proto() for k, v in self.mobility_config.items()}, - service_configs=service_configs, - config_service_configs=config_service_configs, - emane_configs=emane_configs, - wireless_config={k: v.to_proto() for k, v in self.wireless_config.items()}, ) - def set_wlan(self, config: dict[str, str]) -> None: - for key, value in config.items(): - option = ConfigOption(name=key, value=value) - self.wlan_config[key] = option - - def set_mobility(self, config: dict[str, str]) -> None: - for key, value in config.items(): - option = ConfigOption(name=key, value=value) - self.mobility_config[key] = option - - def set_emane_model( - self, model: str, config: dict[str, str], iface_id: int = None - ) -> None: - key = (model, iface_id) - config_options = self.emane_model_configs.setdefault(key, {}) - for key, value in config.items(): - option = ConfigOption(name=key, value=value) - config_options[key] = option - @dataclass class Session: - id: int = None - state: SessionState = SessionState.DEFINITION - nodes: dict[int, Node] = field(default_factory=dict) - links: list[Link] = field(default_factory=list) - dir: str = None - user: str = None - default_services: dict[str, set[str]] = field(default_factory=dict) - location: SessionLocation = SessionLocation( - x=0.0, y=0.0, z=0.0, lat=47.57917, lon=-122.13232, alt=2.0, scale=150.0 - ) - hooks: dict[str, Hook] = field(default_factory=dict) - metadata: dict[str, str] = field(default_factory=dict) - file: Path = None - options: dict[str, ConfigOption] = field(default_factory=dict) - servers: list[Server] = field(default_factory=list) + id: int + state: SessionState + nodes: Dict[int, Node] + links: List[Link] + dir: str + user: str + default_services: Dict[str, Set[str]] + location: SessionLocation + hooks: Dict[str, Hook] + emane_models: List[str] + emane_config: Dict[str, ConfigOption] + metadata: Dict[str, str] + file: Path @classmethod def 
from_proto(cls, proto: core_pb2.Session) -> "Session": - nodes: dict[int, Node] = {x.id: Node.from_proto(x) for x in proto.nodes} + nodes: Dict[int, Node] = {x.id: Node.from_proto(x) for x in proto.nodes} links = [Link.from_proto(x) for x in proto.links] - default_services = {x.model: set(x.services) for x in proto.default_services} + default_services = { + x.node_type: set(x.services) for x in proto.default_services + } hooks = {x.file: Hook.from_proto(x) for x in proto.hooks} + # update nodes with their current configurations + for model in proto.emane_model_configs: + iface_id = None + if model.iface_id != -1: + iface_id = model.iface_id + node = nodes[model.node_id] + key = (model.model, iface_id) + node.emane_model_configs[key] = ConfigOption.from_dict(model.config) + for node_id, mapped_config in proto.wlan_configs.items(): + node = nodes[node_id] + node.wlan_config = ConfigOption.from_dict(mapped_config.config) + for config in proto.service_configs: + service = config.service + node = nodes[config.node_id] + node.service_configs[service] = NodeServiceData.from_proto(config.data) + for file, data in config.files.items(): + files = node.service_file_configs.setdefault(service, {}) + files[file] = data + for config in proto.config_service_configs: + node = nodes[config.node_id] + node.config_service_configs[config.name] = ConfigServiceData( + templates=dict(config.templates), config=dict(config.config) + ) + for node_id, mapped_config in proto.mobility_configs.items(): + node = nodes[node_id] + node.mobility_config = ConfigOption.from_dict(mapped_config.config) file_path = Path(proto.file) if proto.file else None - options = ConfigOption.from_dict(proto.options) - servers = [Server.from_proto(x) for x in proto.servers] return Session( id=proto.id, state=SessionState(proto.state), @@ -907,107 +764,10 @@ class Session: default_services=default_services, location=SessionLocation.from_proto(proto.location), hooks=hooks, + emane_models=list(proto.emane_models), + emane_config=ConfigOption.from_dict(proto.emane_config), metadata=dict(proto.metadata), file=file_path, - options=options, - servers=servers, - ) - - def to_proto(self) -> core_pb2.Session: - nodes = [x.to_proto() for x in self.nodes.values()] - links = [x.to_proto() for x in self.links] - hooks = [x.to_proto() for x in self.hooks.values()] - options = {k: v.to_proto() for k, v in self.options.items()} - servers = [x.to_proto() for x in self.servers] - default_services = [] - for model, services in self.default_services.items(): - default_service = services_pb2.ServiceDefaults( - model=model, services=services - ) - default_services.append(default_service) - file = str(self.file) if self.file else None - return core_pb2.Session( - id=self.id, - state=self.state.value, - nodes=nodes, - links=links, - dir=self.dir, - user=self.user, - default_services=default_services, - location=self.location.to_proto(), - hooks=hooks, - metadata=self.metadata, - file=file, - options=options, - servers=servers, - ) - - def add_node( - self, - _id: int, - *, - name: str = None, - _type: NodeType = NodeType.DEFAULT, - model: str = "PC", - position: Position = None, - geo: Geo = None, - emane: str = None, - image: str = None, - server: str = None, - ) -> Node: - node = Node( - id=_id, - name=name, - type=_type, - model=model, - position=position, - geo=geo, - emane=emane, - image=image, - server=server, - ) - self.nodes[node.id] = node - return node - - def add_link( - self, - *, - node1: Node, - node2: Node, - iface1: Interface = None, - iface2: 
Interface = None, - options: LinkOptions = None, - ) -> Link: - link = Link( - node1_id=node1.id, - node2_id=node2.id, - iface1=iface1, - iface2=iface2, - options=options, - ) - self.links.append(link) - return link - - def set_options(self, config: dict[str, str]) -> None: - for key, value in config.items(): - option = ConfigOption(name=key, value=value) - self.options[key] = option - - -@dataclass -class CoreConfig: - services: list[Service] = field(default_factory=list) - config_services: list[ConfigService] = field(default_factory=list) - emane_models: list[str] = field(default_factory=list) - - @classmethod - def from_proto(cls, proto: core_pb2.GetConfigResponse) -> "CoreConfig": - services = [Service.from_proto(x) for x in proto.services] - config_services = [ConfigService.from_proto(x) for x in proto.config_services] - return CoreConfig( - services=services, - config_services=config_services, - emane_models=list(proto.emane_models), ) @@ -1089,7 +849,7 @@ class ConfigEvent: node_id: int object: str type: int - data_types: list[int] + data_types: List[int] data_values: str captions: str bitmap: str @@ -1109,6 +869,7 @@ class ConfigEvent: data_types=list(proto.data_types), data_values=proto.data_values, captions=proto.captions, + bitmap=proto.bitmap, possible_values=proto.possible_values, groups=proto.groups, iface_id=proto.iface_id, @@ -1200,13 +961,13 @@ class EmanePathlossesRequest: ) -@dataclass(frozen=True) +@dataclass class MoveNodesRequest: session_id: int node_id: int - source: str = field(compare=False, default=None) - position: Position = field(compare=False, default=None) - geo: Geo = field(compare=False, default=None) + source: str = None + position: Position = None + geo: Geo = None def to_proto(self) -> core_pb2.MoveNodesRequest: position = self.position.to_proto() if self.position else None diff --git a/daemon/core/emane/models/__init__.py b/daemon/core/api/tlv/__init__.py similarity index 100% rename from daemon/core/emane/models/__init__.py rename to daemon/core/api/tlv/__init__.py diff --git a/daemon/core/api/tlv/coreapi.py b/daemon/core/api/tlv/coreapi.py new file mode 100644 index 00000000..756b623c --- /dev/null +++ b/daemon/core/api/tlv/coreapi.py @@ -0,0 +1,1016 @@ +""" +Uses coreapi_data for message and TLV types, and defines TLV data +types and objects used for parsing and building CORE API messages. + +CORE API messaging is leveraged for communication with the GUI. +""" + +import binascii +import socket +import struct +from enum import Enum + +import netaddr + +from core.api.tlv import structutils +from core.api.tlv.enumerations import ( + ConfigTlvs, + EventTlvs, + ExceptionTlvs, + ExecuteTlvs, + FileTlvs, + InterfaceTlvs, + LinkTlvs, + MessageTypes, + NodeTlvs, + SessionTlvs, +) +from core.emulator.enumerations import MessageFlags, RegisterTlvs + + +class CoreTlvData: + """ + Helper base class used for packing and unpacking values using struct. + """ + + # format string for packing data + data_format = None + # python data type for the data + data_type = None + # pad length for data after packing + pad_len = None + + @classmethod + def pack(cls, value): + """ + Convenience method for packing data using the struct module. + + :param value: value to pack + :return: length of data and the packed data itself + :rtype: tuple + """ + data = struct.pack(cls.data_format, value) + length = len(data) - cls.pad_len + return length, data + + @classmethod + def unpack(cls, data): + """ + Convenience method for unpacking data using the struct module. 
+ + :param data: data to unpack + :return: the value of the unpacked data + """ + return struct.unpack(cls.data_format, data)[0] + + @classmethod + def pack_string(cls, value): + """ + Convenience method for packing data from a string representation. + + :param str value: value to pack + :return: length of data and the packed data itself + :rtype: tuple + """ + return cls.pack(cls.from_string(value)) + + @classmethod + def from_string(cls, value): + """ + Retrieve the value type from a string representation. + + :param str value: value to get a data type from + :return: value parsed from the string representation + """ + return cls.data_type(value) + + +class CoreTlvDataObj(CoreTlvData): + """ + Helper class for packing custom object data. + """ + + @classmethod + def pack(cls, value): + """ + Convenience method for packing custom object data. + + :param value: custom object to pack + :return: length of data and the packed data itself + :rtype: tuple + """ + value = cls.get_value(value) + return super().pack(value) + + @classmethod + def unpack(cls, data): + """ + Convenience method for unpacking custom object data. + + :param data: data to unpack custom object from + :return: unpacked custom object + """ + data = super().unpack(data) + return cls.new_obj(data) + + @staticmethod + def get_value(obj): + """ + Method that will be used to retrieve the data to pack from a custom object. + + :param obj: custom object to get data to pack + :return: data value to pack + """ + raise NotImplementedError + + @staticmethod + def new_obj(obj): + """ + Method for creating a custom object from unpacked data. + + :param obj: unpacked data to create a custom object from + :return: the created custom object + """ + raise NotImplementedError + + +class CoreTlvDataUint16(CoreTlvData): + """ + Helper class for packing uint16 data. + """ + + data_format = "!H" + data_type = int + pad_len = 0 + + +class CoreTlvDataUint32(CoreTlvData): + """ + Helper class for packing uint32 data. + """ + + data_format = "!2xI" + data_type = int + pad_len = 2 + + +class CoreTlvDataUint64(CoreTlvData): + """ + Helper class for packing uint64 data. + """ + + data_format = "!2xQ" + data_type = int + pad_len = 2 + + +class CoreTlvDataString(CoreTlvData): + """ + Helper class for packing string data. + """ + + data_type = str + + @classmethod + def pack(cls, value): + """ + Convenience method for packing string data. + + :param str value: string to pack + :return: length of data packed and the packed data + :rtype: tuple + """ + if not isinstance(value, str): + raise ValueError(f"value not a string: {type(value)}") + value = value.encode("utf-8") + + if len(value) < 256: + header_len = CoreTlv.header_len + else: + header_len = CoreTlv.long_header_len + + pad_len = -(header_len + len(value)) % 4 + return len(value), value + b"\0" * pad_len + + @classmethod + def unpack(cls, data): + """ + Convenience method for unpacking string data. + + :param bytes data: string data to unpack + :return: unpacked string data + """ + return data.rstrip(b"\0").decode("utf-8") + + +class CoreTlvDataUint16List(CoreTlvData): + """ + List of unsigned 16-bit values. + """ + + data_type = tuple + data_format = "!H" + + @classmethod + def pack(cls, values): + """ + Convenience method for packing a uint 16 list. 
+ + :param list values: uint 16 list to pack + :return: length of data packed and the packed data + :rtype: tuple + """ + if not isinstance(values, tuple): + raise ValueError(f"value not a tuple: {values}") + + data = b"" + for value in values: + data += struct.pack(cls.data_format, value) + + pad_len = -(CoreTlv.header_len + len(data)) % 4 + return len(data), data + b"\0" * pad_len + + @classmethod + def unpack(cls, data): + """ + Convenience method for unpacking a uint 16 list. + + :param data: data to unpack + :return: unpacked data + """ + size = int(len(data) / 2) + data_format = f"!{size}H" + return struct.unpack(data_format, data) + + @classmethod + def from_string(cls, value): + """ + Retrieves a uint 16 list from a string. + + :param str value: string representation of a uint 16 list + :return: uint 16 list + :rtype: list + """ + return tuple(int(x) for x in value.split()) + + +class CoreTlvDataIpv4Addr(CoreTlvDataObj): + """ + Utility class for packing/unpacking Ipv4 addresses. + """ + + data_type = str + data_format = "!2x4s" + pad_len = 2 + + @staticmethod + def get_value(obj): + """ + Retrieve Ipv4 address value from object. + + :param str obj: ip address to get value from + :return: packed address + :rtype: bytes + """ + return socket.inet_pton(socket.AF_INET, obj) + + @staticmethod + def new_obj(value): + """ + Retrieve Ipv4 address from a packed representation. + + :param bytes value: value to get Ipv4 address from + :return: Ipv4 address + :rtype: str + """ + return socket.inet_ntop(socket.AF_INET, value) + + +class CoreTlvDataIPv6Addr(CoreTlvDataObj): + """ + Utility class for packing/unpacking Ipv6 addresses. + """ + + data_format = "!16s2x" + data_type = str + pad_len = 2 + + @staticmethod + def get_value(obj): + """ + Retrieve Ipv6 address value from object. + + :param str obj: ip address to get value from + :return: packed address + :rtype: bytes + """ + return socket.inet_pton(socket.AF_INET6, obj) + + @staticmethod + def new_obj(value): + """ + Retrieve Ipv6 address from a packed representation. + + :param bytes value: value to get Ipv6 address from + :return: Ipv6 address + :rtype: str + """ + return socket.inet_ntop(socket.AF_INET6, value) + + +class CoreTlvDataMacAddr(CoreTlvDataObj): + """ + Utility class for packing/unpacking mac addresses. + """ + + data_format = "!2x8s" + data_type = str + pad_len = 2 + + @staticmethod + def get_value(obj): + """ + Retrieve mac address value from object. + + :param str obj: mac address to get value from + :return: packed mac address + :rtype: bytes + """ + # extend to 64 bits + return b"\0\0" + netaddr.EUI(obj).packed + + @staticmethod + def new_obj(value): + """ + Retrieve mac address from a packed representation. + + :param bytes value: value to get mac address from + :return: mac address + :rtype: str + """ + # only use 48 bits + value = binascii.hexlify(value[2:]).decode() + mac = netaddr.EUI(value, dialect=netaddr.mac_unix_expanded) + return str(mac) + + +class CoreTlv: + """ + Base class for representing CORE TLVs. + """ + + header_format = "!BB" + header_len = struct.calcsize(header_format) + + long_header_format = "!BBH" + long_header_len = struct.calcsize(long_header_format) + + tlv_type_map = Enum + tlv_data_class_map = {} + + def __init__(self, tlv_type, tlv_data): + """ + Create a CoreTlv instance. 
+ + :param int tlv_type: tlv type + :param tlv_data: data to unpack + :return: unpacked data + """ + self.tlv_type = tlv_type + if tlv_data: + try: + self.value = self.tlv_data_class_map[self.tlv_type].unpack(tlv_data) + except KeyError: + self.value = tlv_data + else: + self.value = None + + @classmethod + def unpack(cls, data): + """ + Parse data and return unpacked class. + + :param data: data to unpack + :return: unpacked data class + """ + tlv_type, tlv_len = struct.unpack(cls.header_format, data[: cls.header_len]) + header_len = cls.header_len + if tlv_len == 0: + tlv_type, _zero, tlv_len = struct.unpack( + cls.long_header_format, data[: cls.long_header_len] + ) + header_len = cls.long_header_len + tlv_size = header_len + tlv_len + # for 32-bit alignment + tlv_size += -tlv_size % 4 + return cls(tlv_type, data[header_len:tlv_size]), data[tlv_size:] + + @classmethod + def pack(cls, tlv_type, value): + """ + Pack a TLV value, based on type. + + :param int tlv_type: type of data to pack + :param value: data to pack + :return: header and packed data + """ + tlv_len, tlv_data = cls.tlv_data_class_map[tlv_type].pack(value) + if tlv_len < 256: + hdr = struct.pack(cls.header_format, tlv_type, tlv_len) + else: + hdr = struct.pack(cls.long_header_format, tlv_type, 0, tlv_len) + return hdr + tlv_data + + @classmethod + def pack_string(cls, tlv_type, value): + """ + Pack data type from a string representation + + :param int tlv_type: type of data to pack + :param str value: string representation of data + :return: header and packed data + """ + return cls.pack(tlv_type, cls.tlv_data_class_map[tlv_type].from_string(value)) + + def type_str(self): + """ + Retrieve type string for this data type. + + :return: data type name + :rtype: str + """ + try: + return self.tlv_type_map(self.tlv_type).name + except ValueError: + return f"unknown tlv type: {self.tlv_type}" + + def __str__(self): + """ + String representation of this data type. + + :return: string representation + :rtype: str + """ + return f"{self.__class__.__name__} " + + +class CoreNodeTlv(CoreTlv): + """ + Class for representing CORE Node TLVs. + """ + + tlv_type_map = NodeTlvs + tlv_data_class_map = { + NodeTlvs.NUMBER.value: CoreTlvDataUint32, + NodeTlvs.TYPE.value: CoreTlvDataUint32, + NodeTlvs.NAME.value: CoreTlvDataString, + NodeTlvs.IP_ADDRESS.value: CoreTlvDataIpv4Addr, + NodeTlvs.MAC_ADDRESS.value: CoreTlvDataMacAddr, + NodeTlvs.IP6_ADDRESS.value: CoreTlvDataIPv6Addr, + NodeTlvs.MODEL.value: CoreTlvDataString, + NodeTlvs.EMULATION_SERVER.value: CoreTlvDataString, + NodeTlvs.SESSION.value: CoreTlvDataString, + NodeTlvs.X_POSITION.value: CoreTlvDataUint16, + NodeTlvs.Y_POSITION.value: CoreTlvDataUint16, + NodeTlvs.CANVAS.value: CoreTlvDataUint16, + NodeTlvs.EMULATION_ID.value: CoreTlvDataUint32, + NodeTlvs.NETWORK_ID.value: CoreTlvDataUint32, + NodeTlvs.SERVICES.value: CoreTlvDataString, + NodeTlvs.LATITUDE.value: CoreTlvDataString, + NodeTlvs.LONGITUDE.value: CoreTlvDataString, + NodeTlvs.ALTITUDE.value: CoreTlvDataString, + NodeTlvs.ICON.value: CoreTlvDataString, + NodeTlvs.OPAQUE.value: CoreTlvDataString, + } + + +class CoreLinkTlv(CoreTlv): + """ + Class for representing CORE link TLVs. 
+ """ + + tlv_type_map = LinkTlvs + tlv_data_class_map = { + LinkTlvs.N1_NUMBER.value: CoreTlvDataUint32, + LinkTlvs.N2_NUMBER.value: CoreTlvDataUint32, + LinkTlvs.DELAY.value: CoreTlvDataUint64, + LinkTlvs.BANDWIDTH.value: CoreTlvDataUint64, + LinkTlvs.LOSS.value: CoreTlvDataString, + LinkTlvs.DUP.value: CoreTlvDataString, + LinkTlvs.JITTER.value: CoreTlvDataUint64, + LinkTlvs.MER.value: CoreTlvDataUint16, + LinkTlvs.BURST.value: CoreTlvDataUint16, + LinkTlvs.SESSION.value: CoreTlvDataString, + LinkTlvs.MBURST.value: CoreTlvDataUint16, + LinkTlvs.TYPE.value: CoreTlvDataUint32, + LinkTlvs.GUI_ATTRIBUTES.value: CoreTlvDataString, + LinkTlvs.UNIDIRECTIONAL.value: CoreTlvDataUint16, + LinkTlvs.EMULATION_ID.value: CoreTlvDataUint32, + LinkTlvs.NETWORK_ID.value: CoreTlvDataUint32, + LinkTlvs.KEY.value: CoreTlvDataUint32, + LinkTlvs.IFACE1_NUMBER.value: CoreTlvDataUint16, + LinkTlvs.IFACE1_IP4.value: CoreTlvDataIpv4Addr, + LinkTlvs.IFACE1_IP4_MASK.value: CoreTlvDataUint16, + LinkTlvs.IFACE1_MAC.value: CoreTlvDataMacAddr, + LinkTlvs.IFACE1_IP6.value: CoreTlvDataIPv6Addr, + LinkTlvs.IFACE1_IP6_MASK.value: CoreTlvDataUint16, + LinkTlvs.IFACE2_NUMBER.value: CoreTlvDataUint16, + LinkTlvs.IFACE2_IP4.value: CoreTlvDataIpv4Addr, + LinkTlvs.IFACE2_IP4_MASK.value: CoreTlvDataUint16, + LinkTlvs.IFACE2_MAC.value: CoreTlvDataMacAddr, + LinkTlvs.IFACE2_IP6.value: CoreTlvDataIPv6Addr, + LinkTlvs.IFACE2_IP6_MASK.value: CoreTlvDataUint16, + LinkTlvs.IFACE1_NAME.value: CoreTlvDataString, + LinkTlvs.IFACE2_NAME.value: CoreTlvDataString, + LinkTlvs.OPAQUE.value: CoreTlvDataString, + } + + +class CoreExecuteTlv(CoreTlv): + """ + Class for representing CORE execute TLVs. + """ + + tlv_type_map = ExecuteTlvs + tlv_data_class_map = { + ExecuteTlvs.NODE.value: CoreTlvDataUint32, + ExecuteTlvs.NUMBER.value: CoreTlvDataUint32, + ExecuteTlvs.TIME.value: CoreTlvDataUint32, + ExecuteTlvs.COMMAND.value: CoreTlvDataString, + ExecuteTlvs.RESULT.value: CoreTlvDataString, + ExecuteTlvs.STATUS.value: CoreTlvDataUint32, + ExecuteTlvs.SESSION.value: CoreTlvDataString, + } + + +class CoreRegisterTlv(CoreTlv): + """ + Class for representing CORE register TLVs. + """ + + tlv_type_map = RegisterTlvs + tlv_data_class_map = { + RegisterTlvs.WIRELESS.value: CoreTlvDataString, + RegisterTlvs.MOBILITY.value: CoreTlvDataString, + RegisterTlvs.UTILITY.value: CoreTlvDataString, + RegisterTlvs.EXECUTE_SERVER.value: CoreTlvDataString, + RegisterTlvs.GUI.value: CoreTlvDataString, + RegisterTlvs.EMULATION_SERVER.value: CoreTlvDataString, + RegisterTlvs.SESSION.value: CoreTlvDataString, + } + + +class CoreConfigTlv(CoreTlv): + """ + Class for representing CORE configuration TLVs. + """ + + tlv_type_map = ConfigTlvs + tlv_data_class_map = { + ConfigTlvs.NODE.value: CoreTlvDataUint32, + ConfigTlvs.OBJECT.value: CoreTlvDataString, + ConfigTlvs.TYPE.value: CoreTlvDataUint16, + ConfigTlvs.DATA_TYPES.value: CoreTlvDataUint16List, + ConfigTlvs.VALUES.value: CoreTlvDataString, + ConfigTlvs.CAPTIONS.value: CoreTlvDataString, + ConfigTlvs.BITMAP.value: CoreTlvDataString, + ConfigTlvs.POSSIBLE_VALUES.value: CoreTlvDataString, + ConfigTlvs.GROUPS.value: CoreTlvDataString, + ConfigTlvs.SESSION.value: CoreTlvDataString, + ConfigTlvs.IFACE_ID.value: CoreTlvDataUint16, + ConfigTlvs.NETWORK_ID.value: CoreTlvDataUint32, + ConfigTlvs.OPAQUE.value: CoreTlvDataString, + } + + +class CoreFileTlv(CoreTlv): + """ + Class for representing CORE file TLVs. 
+ """ + + tlv_type_map = FileTlvs + tlv_data_class_map = { + FileTlvs.NODE.value: CoreTlvDataUint32, + FileTlvs.NAME.value: CoreTlvDataString, + FileTlvs.MODE.value: CoreTlvDataString, + FileTlvs.NUMBER.value: CoreTlvDataUint16, + FileTlvs.TYPE.value: CoreTlvDataString, + FileTlvs.SOURCE_NAME.value: CoreTlvDataString, + FileTlvs.SESSION.value: CoreTlvDataString, + FileTlvs.DATA.value: CoreTlvDataString, + FileTlvs.COMPRESSED_DATA.value: CoreTlvDataString, + } + + +class CoreInterfaceTlv(CoreTlv): + """ + Class for representing CORE interface TLVs. + """ + + tlv_type_map = InterfaceTlvs + tlv_data_class_map = { + InterfaceTlvs.NODE.value: CoreTlvDataUint32, + InterfaceTlvs.NUMBER.value: CoreTlvDataUint16, + InterfaceTlvs.NAME.value: CoreTlvDataString, + InterfaceTlvs.IP_ADDRESS.value: CoreTlvDataIpv4Addr, + InterfaceTlvs.MASK.value: CoreTlvDataUint16, + InterfaceTlvs.MAC_ADDRESS.value: CoreTlvDataMacAddr, + InterfaceTlvs.IP6_ADDRESS.value: CoreTlvDataIPv6Addr, + InterfaceTlvs.IP6_MASK.value: CoreTlvDataUint16, + InterfaceTlvs.TYPE.value: CoreTlvDataUint16, + InterfaceTlvs.SESSION.value: CoreTlvDataString, + InterfaceTlvs.STATE.value: CoreTlvDataUint16, + InterfaceTlvs.EMULATION_ID.value: CoreTlvDataUint32, + InterfaceTlvs.NETWORK_ID.value: CoreTlvDataUint32, + } + + +class CoreEventTlv(CoreTlv): + """ + Class for representing CORE event TLVs. + """ + + tlv_type_map = EventTlvs + tlv_data_class_map = { + EventTlvs.NODE.value: CoreTlvDataUint32, + EventTlvs.TYPE.value: CoreTlvDataUint32, + EventTlvs.NAME.value: CoreTlvDataString, + EventTlvs.DATA.value: CoreTlvDataString, + EventTlvs.TIME.value: CoreTlvDataString, + EventTlvs.SESSION.value: CoreTlvDataString, + } + + +class CoreSessionTlv(CoreTlv): + """ + Class for representing CORE session TLVs. + """ + + tlv_type_map = SessionTlvs + tlv_data_class_map = { + SessionTlvs.NUMBER.value: CoreTlvDataString, + SessionTlvs.NAME.value: CoreTlvDataString, + SessionTlvs.FILE.value: CoreTlvDataString, + SessionTlvs.NODE_COUNT.value: CoreTlvDataString, + SessionTlvs.DATE.value: CoreTlvDataString, + SessionTlvs.THUMB.value: CoreTlvDataString, + SessionTlvs.USER.value: CoreTlvDataString, + SessionTlvs.OPAQUE.value: CoreTlvDataString, + } + + +class CoreExceptionTlv(CoreTlv): + """ + Class for representing CORE exception TLVs. + """ + + tlv_type_map = ExceptionTlvs + tlv_data_class_map = { + ExceptionTlvs.NODE.value: CoreTlvDataUint32, + ExceptionTlvs.SESSION.value: CoreTlvDataString, + ExceptionTlvs.LEVEL.value: CoreTlvDataUint16, + ExceptionTlvs.SOURCE.value: CoreTlvDataString, + ExceptionTlvs.DATE.value: CoreTlvDataString, + ExceptionTlvs.TEXT.value: CoreTlvDataString, + ExceptionTlvs.OPAQUE.value: CoreTlvDataString, + } + + +class CoreMessage: + """ + Base class for representing CORE messages. + """ + + header_format = "!BBH" + header_len = struct.calcsize(header_format) + message_type = None + flag_map = MessageFlags + tlv_class = CoreTlv + + def __init__(self, flags, hdr, data): + self.raw_message = hdr + data + self.flags = flags + self.tlv_data = {} + self.parse_data(data) + + @classmethod + def unpack_header(cls, data): + """ + parse data and return (message_type, message_flags, message_len). 
+ + :param str data: data to parse + :return: unpacked tuple + :rtype: tuple + """ + message_type, message_flags, message_len = struct.unpack( + cls.header_format, data[: cls.header_len] + ) + return message_type, message_flags, message_len + + @classmethod + def create(cls, flags, values): + tlv_data = structutils.pack_values(cls.tlv_class, values) + packed = cls.pack(flags, tlv_data) + header_data = packed[: cls.header_len] + return cls(flags, header_data, tlv_data) + + @classmethod + def pack(cls, message_flags, tlv_data): + """ + Pack CORE message data. + + :param message_flags: message flags to pack with data + :param tlv_data: data to get length from for packing + :return: combined header and tlv data + """ + header = struct.pack( + cls.header_format, cls.message_type, message_flags, len(tlv_data) + ) + return header + tlv_data + + def add_tlv_data(self, key, value): + """ + Add TLV data into the data map. + + :param int key: key to store TLV data + :param value: data to associate with key + :return: nothing + """ + if key in self.tlv_data: + raise KeyError(f"key already exists: {key} (val={value})") + + self.tlv_data[key] = value + + def get_tlv(self, tlv_type): + """ + Retrieve TLV data from data map. + + :param int tlv_type: type of data to retrieve + :return: TLV type data + """ + return self.tlv_data.get(tlv_type) + + def parse_data(self, data): + """ + Parse data while possible and adding TLV data to the data map. + + :param data: data to parse for TLV data + :return: nothing + """ + while data: + tlv, data = self.tlv_class.unpack(data) + self.add_tlv_data(tlv.tlv_type, tlv.value) + + def pack_tlv_data(self): + """ + Opposite of parse_data(). Return packed TLV data using self.tlv_data dict. Used by repack(). + + :return: packed data + :rtype: str + """ + keys = sorted(self.tlv_data.keys()) + tlv_data = b"" + for key in keys: + value = self.tlv_data[key] + tlv_data += self.tlv_class.pack(key, value) + return tlv_data + + def repack(self): + """ + Invoke after updating self.tlv_data[] to rebuild self.raw_message. + Useful for modifying a message that has been parsed, before + sending the raw data again. + + :return: nothing + """ + tlv_data = self.pack_tlv_data() + self.raw_message = self.pack(self.flags, tlv_data) + + def type_str(self): + """ + Retrieve data of the message type. + + :return: name of message type + :rtype: str + """ + try: + return MessageTypes(self.message_type).name + except ValueError: + return f"unknown message type: {self.message_type}" + + def flag_str(self): + """ + Retrieve message flag string. + + :return: message flag string + :rtype: str + """ + message_flags = [] + flag = 1 + + while True: + if self.flags & flag: + try: + message_flags.append(self.flag_map(flag).name) + except ValueError: + message_flags.append(f"0x{flag:x}") + flag <<= 1 + if not (self.flags & ~(flag - 1)): + break + + message_flags = " | ".join(message_flags) + return f"0x{self.flags:x} <{message_flags}>" + + def __str__(self): + """ + Retrieve string representation of the message. + + :return: string representation + :rtype: str + """ + result = f"{self.__class__.__name__} " + + for key in self.tlv_data: + value = self.tlv_data[key] + try: + tlv_type = self.tlv_class.tlv_type_map(key).name + except ValueError: + tlv_type = f"tlv type {key}" + + result += f"\n {tlv_type}: {value}" + + return result + + def node_numbers(self): + """ + Return a list of node numbers included in this message. 
+ """ + number1 = None + number2 = None + + # not all messages have node numbers + if self.message_type == MessageTypes.NODE.value: + number1 = self.get_tlv(NodeTlvs.NUMBER.value) + elif self.message_type == MessageTypes.LINK.value: + number1 = self.get_tlv(LinkTlvs.N1_NUMBER.value) + number2 = self.get_tlv(LinkTlvs.N2_NUMBER.value) + elif self.message_type == MessageTypes.EXECUTE.value: + number1 = self.get_tlv(ExecuteTlvs.NODE.value) + elif self.message_type == MessageTypes.CONFIG.value: + number1 = self.get_tlv(ConfigTlvs.NODE.value) + elif self.message_type == MessageTypes.FILE.value: + number1 = self.get_tlv(FileTlvs.NODE.value) + elif self.message_type == MessageTypes.INTERFACE.value: + number1 = self.get_tlv(InterfaceTlvs.NODE.value) + elif self.message_type == MessageTypes.EVENT.value: + number1 = self.get_tlv(EventTlvs.NODE.value) + + result = [] + + if number1: + result.append(number1) + + if number2: + result.append(number2) + + return result + + def session_numbers(self): + """ + Return a list of session numbers included in this message. + """ + result = [] + + if self.message_type == MessageTypes.SESSION.value: + sessions = self.get_tlv(SessionTlvs.NUMBER.value) + elif self.message_type == MessageTypes.EXCEPTION.value: + sessions = self.get_tlv(ExceptionTlvs.SESSION.value) + else: + # All other messages share TLV number 0xA for the session number(s). + sessions = self.get_tlv(NodeTlvs.SESSION.value) + + if sessions: + for session_id in sessions.split("|"): + result.append(int(session_id)) + + return result + + +class CoreNodeMessage(CoreMessage): + """ + CORE node message class. + """ + + message_type = MessageTypes.NODE.value + tlv_class = CoreNodeTlv + + +class CoreLinkMessage(CoreMessage): + """ + CORE link message class. + """ + + message_type = MessageTypes.LINK.value + tlv_class = CoreLinkTlv + + +class CoreExecMessage(CoreMessage): + """ + CORE execute message class. + """ + + message_type = MessageTypes.EXECUTE.value + tlv_class = CoreExecuteTlv + + +class CoreRegMessage(CoreMessage): + """ + CORE register message class. + """ + + message_type = MessageTypes.REGISTER.value + tlv_class = CoreRegisterTlv + + +class CoreConfMessage(CoreMessage): + """ + CORE configuration message class. + """ + + message_type = MessageTypes.CONFIG.value + tlv_class = CoreConfigTlv + + +class CoreFileMessage(CoreMessage): + """ + CORE file message class. + """ + + message_type = MessageTypes.FILE.value + tlv_class = CoreFileTlv + + +class CoreIfaceMessage(CoreMessage): + """ + CORE interface message class. + """ + + message_type = MessageTypes.INTERFACE.value + tlv_class = CoreInterfaceTlv + + +class CoreEventMessage(CoreMessage): + """ + CORE event message class. + """ + + message_type = MessageTypes.EVENT.value + tlv_class = CoreEventTlv + + +class CoreSessionMessage(CoreMessage): + """ + CORE session message class. + """ + + message_type = MessageTypes.SESSION.value + tlv_class = CoreSessionTlv + + +class CoreExceptionMessage(CoreMessage): + """ + CORE exception message class. 
+ """ + + message_type = MessageTypes.EXCEPTION.value + tlv_class = CoreExceptionTlv + + +# map used to translate enumerated message type values to message class objects +CLASS_MAP = { + MessageTypes.NODE.value: CoreNodeMessage, + MessageTypes.LINK.value: CoreLinkMessage, + MessageTypes.EXECUTE.value: CoreExecMessage, + MessageTypes.REGISTER.value: CoreRegMessage, + MessageTypes.CONFIG.value: CoreConfMessage, + MessageTypes.FILE.value: CoreFileMessage, + MessageTypes.INTERFACE.value: CoreIfaceMessage, + MessageTypes.EVENT.value: CoreEventMessage, + MessageTypes.SESSION.value: CoreSessionMessage, + MessageTypes.EXCEPTION.value: CoreExceptionMessage, +} + + +def str_to_list(value): + """ + Helper to convert pipe-delimited string ("a|b|c") into a list (a, b, c). + + :param str value: string to convert + :return: converted list + :rtype: list + """ + + if value is None: + return None + + return value.split("|") diff --git a/daemon/core/api/tlv/corehandlers.py b/daemon/core/api/tlv/corehandlers.py new file mode 100644 index 00000000..65abed8c --- /dev/null +++ b/daemon/core/api/tlv/corehandlers.py @@ -0,0 +1,2097 @@ +""" +socket server request handlers leveraged by core servers. +""" + +import logging +import os +import shlex +import shutil +import socketserver +import sys +import threading +import time +from itertools import repeat +from queue import Empty, Queue +from typing import Optional + +from core import utils +from core.api.tlv import coreapi, dataconversion, structutils +from core.api.tlv.dataconversion import ConfigShim +from core.api.tlv.enumerations import ( + ConfigFlags, + ConfigTlvs, + EventTlvs, + ExceptionTlvs, + ExecuteTlvs, + FileTlvs, + LinkTlvs, + MessageTypes, + NodeTlvs, + SessionTlvs, +) +from core.emulator.data import ( + ConfigData, + EventData, + ExceptionData, + FileData, + InterfaceData, + LinkOptions, + NodeOptions, +) +from core.emulator.enumerations import ( + ConfigDataTypes, + EventTypes, + ExceptionLevels, + LinkTypes, + MessageFlags, + NodeTypes, + RegisterTlvs, +) +from core.emulator.session import Session +from core.errors import CoreCommandError, CoreError +from core.location.mobility import BasicRangeModel +from core.nodes.base import CoreNode, CoreNodeBase, NodeBase +from core.nodes.network import WlanNode +from core.nodes.physical import Rj45Node +from core.services.coreservices import ServiceManager, ServiceShim + + +class CoreHandler(socketserver.BaseRequestHandler): + """ + The CoreHandler class uses the RequestHandler class for servicing requests. + """ + + session_clients = {} + + def __init__(self, request, client_address, server): + """ + Create a CoreRequestHandler instance. 
+ + :param request: request object + :param str client_address: client address + :param CoreServer server: core server instance + """ + self.done = False + self.message_handlers = { + MessageTypes.NODE.value: self.handle_node_message, + MessageTypes.LINK.value: self.handle_link_message, + MessageTypes.EXECUTE.value: self.handle_execute_message, + MessageTypes.REGISTER.value: self.handle_register_message, + MessageTypes.CONFIG.value: self.handle_config_message, + MessageTypes.FILE.value: self.handle_file_message, + MessageTypes.INTERFACE.value: self.handle_iface_message, + MessageTypes.EVENT.value: self.handle_event_message, + MessageTypes.SESSION.value: self.handle_session_message, + } + self.message_queue = Queue() + self.node_status_request = {} + self._shutdown_lock = threading.Lock() + self._sessions_lock = threading.Lock() + + self.handler_threads = [] + thread = threading.Thread(target=self.handler_thread, daemon=True) + thread.start() + self.handler_threads.append(thread) + + self.session: Optional[Session] = None + self.coreemu = server.coreemu + utils.close_onexec(request.fileno()) + socketserver.BaseRequestHandler.__init__(self, request, client_address, server) + + def setup(self): + """ + Client has connected, set up a new connection. + + :return: nothing + """ + logging.debug("new TCP connection: %s", self.client_address) + + def finish(self): + """ + Client has disconnected, end this request handler and disconnect + from the session. Shutdown sessions that are not running. + + :return: nothing + """ + logging.debug("finishing request handler") + logging.debug("remaining message queue size: %s", self.message_queue.qsize()) + + # give some time for message queue to deplete + timeout = 10 + wait = 0 + while not self.message_queue.empty(): + logging.debug("waiting for message queue to empty: %s seconds", wait) + time.sleep(1) + wait += 1 + if wait == timeout: + logging.warning("queue failed to be empty, finishing request handler") + break + + logging.info("client disconnected: notifying threads") + self.done = True + for thread in self.handler_threads: + logging.info("waiting for thread: %s", thread.getName()) + thread.join(timeout) + if thread.is_alive(): + logging.warning( + "joining %s failed: still alive after %s sec", + thread.getName(), + timeout, + ) + + logging.info("connection closed: %s", self.client_address) + if self.session: + # remove client from session broker and shutdown if there are no clients + self.remove_session_handlers() + clients = self.session_clients[self.session.id] + clients.remove(self) + if not clients and not self.session.is_active(): + logging.info( + "no session clients left and not active, initiating shutdown" + ) + self.coreemu.delete_session(self.session.id) + + return socketserver.BaseRequestHandler.finish(self) + + def session_message(self, flags=0): + """ + Build CORE API Sessions message based on current session info. 
+ + :param int flags: message flags + :return: session message + """ + id_list = [] + name_list = [] + file_list = [] + node_count_list = [] + date_list = [] + thumb_list = [] + num_sessions = 0 + + with self._sessions_lock: + for _id in self.coreemu.sessions: + session = self.coreemu.sessions[_id] + num_sessions += 1 + id_list.append(str(_id)) + + name = session.name + if not name: + name = "" + name_list.append(name) + + file_name = session.file_name + if not file_name: + file_name = "" + file_list.append(file_name) + + node_count_list.append(str(session.get_node_count())) + + date_list.append(time.ctime(session.state_time)) + + thumb = session.thumbnail + if not thumb: + thumb = "" + thumb_list.append(thumb) + + session_ids = "|".join(id_list) + names = "|".join(name_list) + files = "|".join(file_list) + node_counts = "|".join(node_count_list) + dates = "|".join(date_list) + thumbs = "|".join(thumb_list) + + if num_sessions > 0: + tlv_data = b"" + if len(session_ids) > 0: + tlv_data += coreapi.CoreSessionTlv.pack( + SessionTlvs.NUMBER.value, session_ids + ) + if len(names) > 0: + tlv_data += coreapi.CoreSessionTlv.pack(SessionTlvs.NAME.value, names) + if len(files) > 0: + tlv_data += coreapi.CoreSessionTlv.pack(SessionTlvs.FILE.value, files) + if len(node_counts) > 0: + tlv_data += coreapi.CoreSessionTlv.pack( + SessionTlvs.NODE_COUNT.value, node_counts + ) + if len(dates) > 0: + tlv_data += coreapi.CoreSessionTlv.pack(SessionTlvs.DATE.value, dates) + if len(thumbs) > 0: + tlv_data += coreapi.CoreSessionTlv.pack(SessionTlvs.THUMB.value, thumbs) + message = coreapi.CoreSessionMessage.pack(flags, tlv_data) + else: + message = None + + return message + + def handle_broadcast_event(self, event_data): + """ + Callback to handle an event broadcast out from a session. + + :param core.emulator.data.EventData event_data: event data to handle + :return: nothing + """ + logging.debug("handling broadcast event: %s", event_data) + + tlv_data = structutils.pack_values( + coreapi.CoreEventTlv, + [ + (EventTlvs.NODE, event_data.node), + (EventTlvs.TYPE, event_data.event_type.value), + (EventTlvs.NAME, event_data.name), + (EventTlvs.DATA, event_data.data), + (EventTlvs.TIME, event_data.time), + (EventTlvs.SESSION, event_data.session), + ], + ) + message = coreapi.CoreEventMessage.pack(0, tlv_data) + + try: + self.sendall(message) + except IOError: + logging.exception("error sending event message") + + def handle_broadcast_file(self, file_data): + """ + Callback to handle a file broadcast out from a session. + + :param core.emulator.data.FileData file_data: file data to handle + :return: nothing + """ + logging.debug("handling broadcast file: %s", file_data) + + tlv_data = structutils.pack_values( + coreapi.CoreFileTlv, + [ + (FileTlvs.NODE, file_data.node), + (FileTlvs.NAME, file_data.name), + (FileTlvs.MODE, file_data.mode), + (FileTlvs.NUMBER, file_data.number), + (FileTlvs.TYPE, file_data.type), + (FileTlvs.SOURCE_NAME, file_data.source), + (FileTlvs.SESSION, file_data.session), + (FileTlvs.DATA, file_data.data), + (FileTlvs.COMPRESSED_DATA, file_data.compressed_data), + ], + ) + message = coreapi.CoreFileMessage.pack(file_data.message_type.value, tlv_data) + + try: + self.sendall(message) + except IOError: + logging.exception("error sending file message") + + def handle_broadcast_config(self, config_data): + """ + Callback to handle a config broadcast out from a session. 
+ + :param core.emulator.data.ConfigData config_data: config data to handle + :return: nothing + """ + logging.debug("handling broadcast config: %s", config_data) + message = dataconversion.convert_config(config_data) + try: + self.sendall(message) + except IOError: + logging.exception("error sending config message") + + def handle_broadcast_exception(self, exception_data): + """ + Callback to handle an exception broadcast out from a session. + + :param core.emulator.data.ExceptionData exception_data: exception data to handle + :return: nothing + """ + logging.debug("handling broadcast exception: %s", exception_data) + tlv_data = structutils.pack_values( + coreapi.CoreExceptionTlv, + [ + (ExceptionTlvs.NODE, exception_data.node), + (ExceptionTlvs.SESSION, str(exception_data.session)), + (ExceptionTlvs.LEVEL, exception_data.level.value), + (ExceptionTlvs.SOURCE, exception_data.source), + (ExceptionTlvs.DATE, exception_data.date), + (ExceptionTlvs.TEXT, exception_data.text), + ], + ) + message = coreapi.CoreExceptionMessage.pack(0, tlv_data) + + try: + self.sendall(message) + except IOError: + logging.exception("error sending exception message") + + def handle_broadcast_node(self, node_data): + """ + Callback to handle a node broadcast out from a session. + + :param core.emulator.data.NodeData node_data: node data to handle + :return: nothing + """ + logging.debug("handling broadcast node: %s", node_data) + message = dataconversion.convert_node(node_data) + try: + self.sendall(message) + except IOError: + logging.exception("error sending node message") + + def handle_broadcast_link(self, link_data): + """ + Callback to handle a link broadcast out from a session. + + :param core.emulator.data.LinkData link_data: link data to handle + :return: nothing + """ + logging.debug("handling broadcast link: %s", link_data) + options_data = link_data.options + loss = "" + if options_data.loss is not None: + loss = str(options_data.loss) + dup = "" + if options_data.dup is not None: + dup = str(options_data.dup) + iface1 = link_data.iface1 + if iface1 is None: + iface1 = InterfaceData() + iface2 = link_data.iface2 + if iface2 is None: + iface2 = InterfaceData() + + tlv_data = structutils.pack_values( + coreapi.CoreLinkTlv, + [ + (LinkTlvs.N1_NUMBER, link_data.node1_id), + (LinkTlvs.N2_NUMBER, link_data.node2_id), + (LinkTlvs.DELAY, options_data.delay), + (LinkTlvs.BANDWIDTH, options_data.bandwidth), + (LinkTlvs.LOSS, loss), + (LinkTlvs.DUP, dup), + (LinkTlvs.JITTER, options_data.jitter), + (LinkTlvs.MER, options_data.mer), + (LinkTlvs.BURST, options_data.burst), + (LinkTlvs.MBURST, options_data.mburst), + (LinkTlvs.TYPE, link_data.type.value), + (LinkTlvs.UNIDIRECTIONAL, options_data.unidirectional), + (LinkTlvs.NETWORK_ID, link_data.network_id), + (LinkTlvs.KEY, options_data.key), + (LinkTlvs.IFACE1_NUMBER, iface1.id), + (LinkTlvs.IFACE1_IP4, iface1.ip4), + (LinkTlvs.IFACE1_IP4_MASK, iface1.ip4_mask), + (LinkTlvs.IFACE1_MAC, iface1.mac), + (LinkTlvs.IFACE1_IP6, iface1.ip6), + (LinkTlvs.IFACE1_IP6_MASK, iface1.ip6_mask), + (LinkTlvs.IFACE2_NUMBER, iface2.id), + (LinkTlvs.IFACE2_IP4, iface2.ip4), + (LinkTlvs.IFACE2_IP4_MASK, iface2.ip4_mask), + (LinkTlvs.IFACE2_MAC, iface2.mac), + (LinkTlvs.IFACE2_IP6, iface2.ip6), + (LinkTlvs.IFACE2_IP6_MASK, iface2.ip6_mask), + ], + ) + + message = coreapi.CoreLinkMessage.pack(link_data.message_type.value, tlv_data) + + try: + self.sendall(message) + except IOError: + logging.exception("error sending link message") + + def register(self): + """ + Return a Register 
Message + + :return: register message data + """ + logging.info( + "GUI has connected to session %d at %s", self.session.id, time.ctime() + ) + tlv_data = b"" + tlv_data += coreapi.CoreRegisterTlv.pack( + RegisterTlvs.EXECUTE_SERVER.value, "core-daemon" + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + RegisterTlvs.EMULATION_SERVER.value, "core-daemon" + ) + tlv_data += coreapi.CoreRegisterTlv.pack(RegisterTlvs.UTILITY.value, "broker") + tlv_data += coreapi.CoreRegisterTlv.pack( + self.session.location.config_type.value, self.session.location.name + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + self.session.mobility.config_type.value, self.session.mobility.name + ) + for model_name in self.session.mobility.models: + model_class = self.session.mobility.models[model_name] + tlv_data += coreapi.CoreRegisterTlv.pack( + model_class.config_type.value, model_class.name + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + self.session.services.config_type.value, self.session.services.name + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + self.session.emane.config_type.value, self.session.emane.name + ) + for model_name in self.session.emane.models: + model_class = self.session.emane.models[model_name] + tlv_data += coreapi.CoreRegisterTlv.pack( + model_class.config_type.value, model_class.name + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + self.session.options.config_type.value, self.session.options.name + ) + tlv_data += coreapi.CoreRegisterTlv.pack(RegisterTlvs.UTILITY.value, "metadata") + + return coreapi.CoreRegMessage.pack(MessageFlags.ADD.value, tlv_data) + + def sendall(self, data): + """ + Send raw data to the other end of this TCP connection + using socket"s sendall(). + + :param data: data to send over request socket + :return: data sent + """ + return self.request.sendall(data) + + def receive_message(self): + """ + Receive data and return a CORE API message object. + + :return: received message + :rtype: core.api.tlv.coreapi.CoreMessage + """ + try: + header = self.request.recv(coreapi.CoreMessage.header_len) + except IOError as e: + raise IOError(f"error receiving header ({e})") + + if len(header) != coreapi.CoreMessage.header_len: + if len(header) == 0: + raise EOFError("client disconnected") + else: + raise IOError("invalid message header size") + + message_type, message_flags, message_len = coreapi.CoreMessage.unpack_header( + header + ) + if message_len == 0: + logging.warning("received message with no data") + + data = b"" + while len(data) < message_len: + data += self.request.recv(message_len - len(data)) + if len(data) > message_len: + error_message = f"received message length does not match received data ({len(data)} != {message_len})" + logging.error(error_message) + raise IOError(error_message) + + try: + message_class = coreapi.CLASS_MAP[message_type] + message = message_class(message_flags, header, data) + except KeyError: + message = coreapi.CoreMessage(message_flags, header, data) + message.message_type = message_type + logging.exception("unimplemented core message type: %s", message.type_str()) + + return message + + def queue_message(self, message): + """ + Queue an API message for later processing. 
+ + :param message: message to queue + :return: nothing + """ + logging.debug( + "queueing msg (queuedtimes = %s): type %s", + message.queuedtimes, + MessageTypes(message.message_type), + ) + self.message_queue.put(message) + + def handler_thread(self): + """ + CORE API message handling loop that is spawned for each server + thread; get CORE API messages from the incoming message queue, + and call handlemsg() for processing. + + :return: nothing + """ + while not self.done: + try: + message = self.message_queue.get(timeout=1) + self.handle_message(message) + except Empty: + pass + + def handle_message(self, message): + """ + Handle an incoming message; dispatch based on message type, + optionally sending replies. + + :param message: message to handle + :return: nothing + """ + logging.debug( + "%s handling message:\n%s", threading.currentThread().getName(), message + ) + if message.message_type not in self.message_handlers: + logging.error("no handler for message type: %s", message.type_str()) + return + + message_handler = self.message_handlers[message.message_type] + try: + # TODO: this needs to be removed, make use of the broadcast message methods + replies = message_handler(message) + self.dispatch_replies(replies, message) + except Exception as e: + self.send_exception(ExceptionLevels.ERROR, "corehandler", str(e)) + logging.exception( + "%s: exception while handling message: %s", + threading.currentThread().getName(), + message, + ) + + def dispatch_replies(self, replies, message): + """ + Dispatch replies by CORE to message msg previously received from the client. + + :param list replies: reply messages to dispatch + :param message: message for replies + :return: nothing + """ + for reply in replies: + message_type, message_flags, message_length = coreapi.CoreMessage.unpack_header( + reply + ) + try: + reply_message = coreapi.CLASS_MAP[message_type]( + message_flags, + reply[: coreapi.CoreMessage.header_len], + reply[coreapi.CoreMessage.header_len :], + ) + except KeyError: + # multiple TLVs of same type cause KeyError exception + reply_message = f"CoreMessage (type {message_type} flags {message_flags} length {message_length})" + + logging.debug("sending reply:\n%s", reply_message) + + try: + self.sendall(reply) + except IOError: + logging.exception("error dispatching reply") + + def handle(self): + """ + Handle a new connection request from a client. Dispatch to the + recvmsg() method for receiving data into CORE API messages, and + add them to an incoming message queue. 
+ + :return: nothing + """ + # use port as session id + port = self.request.getpeername()[1] + + # TODO: add shutdown handler for session + self.session = self.coreemu.create_session(port) + logging.debug("created new session for client: %s", self.session.id) + clients = self.session_clients.setdefault(self.session.id, []) + clients.append(self) + + # add handlers for various data + self.add_session_handlers() + + # set initial session state + self.session.set_state(EventTypes.DEFINITION_STATE) + + while True: + try: + message = self.receive_message() + except EOFError: + logging.info("client disconnected") + break + except IOError: + logging.exception("error receiving message") + break + + message.queuedtimes = 0 + self.queue_message(message) + + # delay is required for brief connections, allow session joining + if message.message_type == MessageTypes.SESSION.value: + time.sleep(0.125) + + # broadcast node/link messages to other connected clients + if message.message_type not in [ + MessageTypes.NODE.value, + MessageTypes.LINK.value, + ]: + continue + + clients = self.session_clients[self.session.id] + for client in clients: + if client == self: + continue + + logging.debug("BROADCAST TO OTHER CLIENT: %s", client) + client.sendall(message.raw_message) + + def send_exception(self, level, source, text, node=None): + """ + Sends an exception for display within the GUI. + + :param core.emulator.enumerations.ExceptionLevel level: level for exception + :param str source: source where exception came from + :param str text: details about exception + :param int node: node id, if related to a specific node + :return: nothing + """ + exception_data = ExceptionData( + session=self.session.id, + node=node, + date=time.ctime(), + level=level, + source=source, + text=text, + ) + self.handle_broadcast_exception(exception_data) + + def add_session_handlers(self): + logging.debug("adding session broadcast handlers") + self.session.event_handlers.append(self.handle_broadcast_event) + self.session.exception_handlers.append(self.handle_broadcast_exception) + self.session.node_handlers.append(self.handle_broadcast_node) + self.session.link_handlers.append(self.handle_broadcast_link) + self.session.file_handlers.append(self.handle_broadcast_file) + self.session.config_handlers.append(self.handle_broadcast_config) + + def remove_session_handlers(self): + logging.debug("removing session broadcast handlers") + self.session.event_handlers.remove(self.handle_broadcast_event) + self.session.exception_handlers.remove(self.handle_broadcast_exception) + self.session.node_handlers.remove(self.handle_broadcast_node) + self.session.link_handlers.remove(self.handle_broadcast_link) + self.session.file_handlers.remove(self.handle_broadcast_file) + self.session.config_handlers.remove(self.handle_broadcast_config) + + def handle_node_message(self, message): + """ + Node Message handler + + :param core.api.tlv.coreapi.CoreNodeMessage message: node message + :return: replies to node message + """ + replies = [] + if ( + message.flags & MessageFlags.ADD.value + and message.flags & MessageFlags.DELETE.value + ): + logging.warning("ignoring invalid message: add and delete flag both set") + return () + + _class = CoreNode + node_type_value = message.get_tlv(NodeTlvs.TYPE.value) + if node_type_value is not None: + node_type = NodeTypes(node_type_value) + _class = self.session.get_node_class(node_type) + + node_id = message.get_tlv(NodeTlvs.NUMBER.value) + + options = NodeOptions( + name=message.get_tlv(NodeTlvs.NAME.value), + 
model=message.get_tlv(NodeTlvs.MODEL.value), + ) + + options.set_position( + x=message.get_tlv(NodeTlvs.X_POSITION.value), + y=message.get_tlv(NodeTlvs.Y_POSITION.value), + ) + + lat = message.get_tlv(NodeTlvs.LATITUDE.value) + if lat is not None: + lat = float(lat) + lon = message.get_tlv(NodeTlvs.LONGITUDE.value) + if lon is not None: + lon = float(lon) + alt = message.get_tlv(NodeTlvs.ALTITUDE.value) + if alt is not None: + alt = float(alt) + options.set_location(lat=lat, lon=lon, alt=alt) + + options.icon = message.get_tlv(NodeTlvs.ICON.value) + options.canvas = message.get_tlv(NodeTlvs.CANVAS.value) + options.server = message.get_tlv(NodeTlvs.EMULATION_SERVER.value) + + services = message.get_tlv(NodeTlvs.SERVICES.value) + if services: + options.services = services.split("|") + + if message.flags & MessageFlags.ADD.value: + node = self.session.add_node(_class, node_id, options) + if node: + if message.flags & MessageFlags.STRING.value: + self.node_status_request[node.id] = True + + if self.session.state == EventTypes.RUNTIME_STATE: + self.send_node_emulation_id(node.id) + elif message.flags & MessageFlags.DELETE.value: + with self._shutdown_lock: + result = self.session.delete_node(node_id) + if result and self.session.get_node_count() == 0: + self.session.set_state(EventTypes.SHUTDOWN_STATE) + self.session.delete_nodes() + self.session.distributed.shutdown() + self.session.sdt.shutdown() + + # if we deleted a node broadcast out its removal + if result and message.flags & MessageFlags.STRING.value: + tlvdata = b"" + tlvdata += coreapi.CoreNodeTlv.pack(NodeTlvs.NUMBER.value, node_id) + flags = MessageFlags.DELETE.value | MessageFlags.LOCAL.value + replies.append(coreapi.CoreNodeMessage.pack(flags, tlvdata)) + # node update + else: + self.session.edit_node(node_id, options) + + return replies + + def handle_link_message(self, message): + """ + Link Message handler + + :param core.api.tlv.coreapi.CoreLinkMessage message: link message to handle + :return: link message replies + """ + node1_id = message.get_tlv(LinkTlvs.N1_NUMBER.value) + node2_id = message.get_tlv(LinkTlvs.N2_NUMBER.value) + iface1_data = InterfaceData( + id=message.get_tlv(LinkTlvs.IFACE1_NUMBER.value), + name=message.get_tlv(LinkTlvs.IFACE1_NAME.value), + mac=message.get_tlv(LinkTlvs.IFACE1_MAC.value), + ip4=message.get_tlv(LinkTlvs.IFACE1_IP4.value), + ip4_mask=message.get_tlv(LinkTlvs.IFACE1_IP4_MASK.value), + ip6=message.get_tlv(LinkTlvs.IFACE1_IP6.value), + ip6_mask=message.get_tlv(LinkTlvs.IFACE1_IP6_MASK.value), + ) + iface2_data = InterfaceData( + id=message.get_tlv(LinkTlvs.IFACE2_NUMBER.value), + name=message.get_tlv(LinkTlvs.IFACE2_NAME.value), + mac=message.get_tlv(LinkTlvs.IFACE2_MAC.value), + ip4=message.get_tlv(LinkTlvs.IFACE2_IP4.value), + ip4_mask=message.get_tlv(LinkTlvs.IFACE2_IP4_MASK.value), + ip6=message.get_tlv(LinkTlvs.IFACE2_IP6.value), + ip6_mask=message.get_tlv(LinkTlvs.IFACE2_IP6_MASK.value), + ) + link_type = LinkTypes.WIRED + link_type_value = message.get_tlv(LinkTlvs.TYPE.value) + if link_type_value is not None: + link_type = LinkTypes(link_type_value) + options = LinkOptions() + options.delay = message.get_tlv(LinkTlvs.DELAY.value) + options.bandwidth = message.get_tlv(LinkTlvs.BANDWIDTH.value) + options.jitter = message.get_tlv(LinkTlvs.JITTER.value) + options.mer = message.get_tlv(LinkTlvs.MER.value) + options.burst = message.get_tlv(LinkTlvs.BURST.value) + options.mburst = message.get_tlv(LinkTlvs.MBURST.value) + options.unidirectional = message.get_tlv(LinkTlvs.UNIDIRECTIONAL.value) + 
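Most link TLVs are copied into `LinkOptions` as-is, while `loss` and `dup` are explicitly converted just below because their TLV values arrive as strings. A tiny, hypothetical helper (not part of CORE) showing the defensive coercion pattern for optional TLV values:

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")


def convert_optional(value: Optional[str], converter: Callable[[str], T]) -> Optional[T]:
    """Convert an optional TLV value, leaving None (absent TLV) untouched."""
    return converter(value) if value is not None else None


print(convert_optional(None, float))    # None -> option stays unset
print(convert_optional("12.5", float))  # 12.5
print(convert_optional("3", int))       # 3
```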
options.key = message.get_tlv(LinkTlvs.KEY.value) + loss = message.get_tlv(LinkTlvs.LOSS.value) + dup = message.get_tlv(LinkTlvs.DUP.value) + if loss is not None: + options.loss = float(loss) + if dup is not None: + options.dup = int(dup) + + if message.flags & MessageFlags.ADD.value: + self.session.add_link( + node1_id, node2_id, iface1_data, iface2_data, options, link_type + ) + elif message.flags & MessageFlags.DELETE.value: + node1 = self.session.get_node(node1_id, NodeBase) + node2 = self.session.get_node(node2_id, NodeBase) + if isinstance(node1, Rj45Node): + iface1_data.id = node1.iface_id + if isinstance(node2, Rj45Node): + iface2_data.id = node2.iface_id + self.session.delete_link( + node1_id, node2_id, iface1_data.id, iface2_data.id, link_type + ) + else: + self.session.update_link( + node1_id, node2_id, iface1_data.id, iface2_data.id, options, link_type + ) + return () + + def handle_execute_message(self, message): + """ + Execute Message handler + + :param core.api.tlv.coreapi.CoreExecMessage message: execute message to handle + :return: reply messages + """ + node_id = message.get_tlv(ExecuteTlvs.NODE.value) + execute_num = message.get_tlv(ExecuteTlvs.NUMBER.value) + execute_time = message.get_tlv(ExecuteTlvs.TIME.value) + command = message.get_tlv(ExecuteTlvs.COMMAND.value) + + # local flag indicates command executed locally, not on a node + if node_id is None and not message.flags & MessageFlags.LOCAL.value: + raise ValueError("Execute Message is missing node number.") + + if execute_num is None: + raise ValueError("Execute Message is missing execution number.") + + if execute_time is not None: + self.session.add_event( + float(execute_time), node_id=node_id, name=None, data=command + ) + return () + + try: + node = self.session.get_node(node_id, CoreNodeBase) + + # build common TLV items for reply + tlv_data = b"" + if node_id is not None: + tlv_data += coreapi.CoreExecuteTlv.pack(ExecuteTlvs.NODE.value, node_id) + tlv_data += coreapi.CoreExecuteTlv.pack( + ExecuteTlvs.NUMBER.value, execute_num + ) + tlv_data += coreapi.CoreExecuteTlv.pack(ExecuteTlvs.COMMAND.value, command) + + if message.flags & MessageFlags.TTY.value: + if node_id is None: + raise NotImplementedError + # echo back exec message with cmd for spawning interactive terminal + if command == "bash": + command = "/bin/bash" + res = node.termcmdstring(command) + tlv_data += coreapi.CoreExecuteTlv.pack(ExecuteTlvs.RESULT.value, res) + reply = coreapi.CoreExecMessage.pack(MessageFlags.TTY.value, tlv_data) + return (reply,) + else: + # execute command and send a response + if ( + message.flags & MessageFlags.STRING.value + or message.flags & MessageFlags.TEXT.value + ): + if message.flags & MessageFlags.LOCAL.value: + try: + res = utils.cmd(command) + status = 0 + except CoreCommandError as e: + res = e.stderr + status = e.returncode + else: + try: + res = node.cmd(command) + status = 0 + except CoreCommandError as e: + res = e.stderr + status = e.returncode + if message.flags & MessageFlags.TEXT.value: + tlv_data += coreapi.CoreExecuteTlv.pack( + ExecuteTlvs.RESULT.value, res + ) + if message.flags & MessageFlags.STRING.value: + tlv_data += coreapi.CoreExecuteTlv.pack( + ExecuteTlvs.STATUS.value, status + ) + reply = coreapi.CoreExecMessage.pack(0, tlv_data) + return (reply,) + # execute the command with no response + else: + if message.flags & MessageFlags.LOCAL.value: + utils.mute_detach(command) + else: + node.cmd(command, wait=False) + except CoreError: + logging.exception("error getting object: %s", node_id) + 
# XXX wait and queue this message to try again later + # XXX maybe this should be done differently + if not message.flags & MessageFlags.LOCAL.value: + time.sleep(0.125) + self.queue_message(message) + + return () + + def handle_register_message(self, message): + """ + Register Message Handler + + :param core.api.tlv.coreapi.CoreRegMessage message: register message to handle + :return: reply messages + """ + replies = [] + + # execute a Python script or XML file + execute_server = message.get_tlv(RegisterTlvs.EXECUTE_SERVER.value) + if execute_server: + try: + logging.info("executing: %s", execute_server) + if message.flags & MessageFlags.STRING.value: + old_session_ids = set(self.coreemu.sessions.keys()) + sys.argv = shlex.split(execute_server) + file_name = sys.argv[0] + + if os.path.splitext(file_name)[1].lower() == ".xml": + session = self.coreemu.create_session() + try: + session.open_xml(file_name) + except Exception: + self.coreemu.delete_session(session.id) + raise + else: + thread = threading.Thread( + target=utils.execute_file, + args=( + file_name, + {"__file__": file_name, "coreemu": self.coreemu}, + ), + daemon=True, + ) + thread.start() + thread.join() + + if message.flags & MessageFlags.STRING.value: + new_session_ids = set(self.coreemu.sessions.keys()) + new_sid = new_session_ids.difference(old_session_ids) + try: + sid = new_sid.pop() + logging.info("executed: %s as session %d", execute_server, sid) + except KeyError: + logging.info( + "executed %s with unknown session ID", execute_server + ) + return replies + + logging.debug("checking session %d for RUNTIME state", sid) + session = self.coreemu.sessions.get(sid) + retries = 10 + # wait for session to enter RUNTIME state, to prevent GUI from + # connecting while nodes are still being instantiated + while session.state != EventTypes.RUNTIME_STATE: + logging.debug( + "waiting for session %d to enter RUNTIME state", sid + ) + time.sleep(1) + retries -= 1 + if retries <= 0: + logging.debug("session %d did not enter RUNTIME state", sid) + return replies + + tlv_data = coreapi.CoreRegisterTlv.pack( + RegisterTlvs.EXECUTE_SERVER.value, execute_server + ) + tlv_data += coreapi.CoreRegisterTlv.pack( + RegisterTlvs.SESSION.value, str(sid) + ) + message = coreapi.CoreRegMessage.pack(0, tlv_data) + replies.append(message) + except Exception as e: + logging.exception("error executing: %s", execute_server) + tlv_data = coreapi.CoreExceptionTlv.pack(ExceptionTlvs.LEVEL.value, 2) + tlv_data += coreapi.CoreExceptionTlv.pack( + ExceptionTlvs.TEXT.value, str(e) + ) + message = coreapi.CoreExceptionMessage.pack(0, tlv_data) + replies.append(message) + + return replies + + gui = message.get_tlv(RegisterTlvs.GUI.value) + if gui is None: + logging.debug("ignoring Register message") + else: + # register capabilities with the GUI + replies.append(self.register()) + replies.append(self.session_message()) + + return replies + + def handle_config_message(self, message): + """ + Configuration Message handler + + :param core.api.tlv.coreapi.CoreConfMessage message: configuration message to handle + :return: reply messages + """ + # convert config message to standard config data object + config_data = ConfigData( + node=message.get_tlv(ConfigTlvs.NODE.value), + object=message.get_tlv(ConfigTlvs.OBJECT.value), + type=message.get_tlv(ConfigTlvs.TYPE.value), + data_types=message.get_tlv(ConfigTlvs.DATA_TYPES.value), + data_values=message.get_tlv(ConfigTlvs.VALUES.value), + captions=message.get_tlv(ConfigTlvs.CAPTIONS.value), + 
bitmap=message.get_tlv(ConfigTlvs.BITMAP.value), + possible_values=message.get_tlv(ConfigTlvs.POSSIBLE_VALUES.value), + groups=message.get_tlv(ConfigTlvs.GROUPS.value), + session=message.get_tlv(ConfigTlvs.SESSION.value), + iface_id=message.get_tlv(ConfigTlvs.IFACE_ID.value), + network_id=message.get_tlv(ConfigTlvs.NETWORK_ID.value), + opaque=message.get_tlv(ConfigTlvs.OPAQUE.value), + ) + logging.debug( + "configuration message for %s node %s", config_data.object, config_data.node + ) + message_type = ConfigFlags(config_data.type) + + replies = [] + + # handle session configuration + if config_data.object == "all": + replies = self.handle_config_all(message_type, config_data) + elif config_data.object == self.session.options.name: + replies = self.handle_config_session(message_type, config_data) + elif config_data.object == self.session.location.name: + self.handle_config_location(message_type, config_data) + elif config_data.object == "metadata": + replies = self.handle_config_metadata(message_type, config_data) + elif config_data.object == "broker": + self.handle_config_broker(message_type, config_data) + elif config_data.object == self.session.services.name: + replies = self.handle_config_services(message_type, config_data) + elif config_data.object == self.session.mobility.name: + self.handle_config_mobility(message_type, config_data) + elif config_data.object in self.session.mobility.models: + replies = self.handle_config_mobility_models(message_type, config_data) + elif config_data.object == self.session.emane.name: + replies = self.handle_config_emane(message_type, config_data) + elif config_data.object in self.session.emane.models: + replies = self.handle_config_emane_models(message_type, config_data) + else: + raise Exception("no handler for configuration: %s", config_data.object) + + for reply in replies: + self.handle_broadcast_config(reply) + + return [] + + def handle_config_all(self, message_type, config_data): + replies = [] + + if message_type == ConfigFlags.RESET: + node_id = config_data.node + if node_id is not None: + self.session.mobility.config_reset(node_id) + self.session.emane.config_reset(node_id) + else: + self.session.location.reset() + self.session.services.reset() + self.session.mobility.config_reset() + self.session.emane.config_reset() + else: + raise Exception(f"cant handle config all: {message_type}") + + return replies + + def handle_config_session(self, message_type, config_data): + replies = [] + if message_type == ConfigFlags.REQUEST: + type_flags = ConfigFlags.NONE.value + config = self.session.options.get_configs() + config_response = ConfigShim.config_data( + 0, None, type_flags, self.session.options, config + ) + replies.append(config_response) + elif message_type != ConfigFlags.RESET and config_data.data_values: + values = ConfigShim.str_to_dict(config_data.data_values) + for key in values: + value = values[key] + self.session.options.set_config(key, value) + return replies + + def handle_config_location(self, message_type, config_data): + if message_type == ConfigFlags.RESET: + self.session.location.reset() + else: + if not config_data.data_values: + logging.warning("location data missing") + else: + values = [float(x) for x in config_data.data_values.split("|")] + + # Cartesian coordinate reference point + refx, refy = values[0], values[1] + refz = 0.0 + lat, lon, alt = values[2], values[3], values[4] + # xyz point + self.session.location.refxyz = (refx, refy, refz) + # geographic reference point + self.session.location.setrefgeo(lat, lon, alt) 
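For reference, the location configuration handled here travels as a single pipe-delimited string of six floats: canvas reference x and y, then latitude, longitude, altitude, and a scale value consumed just below. A small illustration of building and parsing that string (the coordinate values are arbitrary examples, not defaults taken from this change):

```python
# Illustration only: the field order follows the parsing code around this
# point; the coordinate values are arbitrary examples.
ref_x, ref_y = 128.0, 128.0
lat, lon, alt = 47.5791667, -122.132322, 2.0
scale = 150.0

data_values = "|".join(str(v) for v in (ref_x, ref_y, lat, lon, alt, scale))
print(data_values)  # 128.0|128.0|47.5791667|-122.132322|2.0|150.0

values = [float(x) for x in data_values.split("|")]
assert values[:2] == [ref_x, ref_y] and values[5] == scale
```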
+ self.session.location.refscale = values[5] + logging.info( + "location configured: %s = %s scale=%s", + self.session.location.refxyz, + self.session.location.refgeo, + self.session.location.refscale, + ) + + def handle_config_metadata(self, message_type, config_data): + replies = [] + if message_type == ConfigFlags.REQUEST: + node_id = config_data.node + metadata_configs = self.session.metadata + if metadata_configs is None: + metadata_configs = {} + data_values = "|".join( + [f"{x}={metadata_configs[x]}" for x in metadata_configs] + ) + data_types = tuple(ConfigDataTypes.STRING.value for _ in metadata_configs) + config_response = ConfigData( + message_type=0, + node=node_id, + object="metadata", + type=ConfigFlags.NONE.value, + data_types=data_types, + data_values=data_values, + ) + replies.append(config_response) + elif message_type != ConfigFlags.RESET and config_data.data_values: + values = ConfigShim.str_to_dict(config_data.data_values) + for key in values: + value = values[key] + self.session.metadata[key] = value + return replies + + def handle_config_broker(self, message_type, config_data): + if message_type not in [ConfigFlags.REQUEST, ConfigFlags.RESET]: + if not config_data.data_values: + logging.info("emulation server data missing") + else: + values = config_data.data_values.split("|") + + # string of "server:ip:port,server:ip:port,..." + server_strings = values[0] + server_list = server_strings.split(",") + + for server in server_list: + server_items = server.split(":") + name, host, _ = server_items[:3] + self.session.distributed.add_server(name, host) + elif message_type == ConfigFlags.RESET: + self.session.distributed.shutdown() + + def handle_config_services(self, message_type, config_data): + replies = [] + node_id = config_data.node + opaque = config_data.opaque + + if message_type == ConfigFlags.REQUEST: + session_id = config_data.session + opaque = config_data.opaque + + logging.debug( + "configuration request: node(%s) session(%s) opaque(%s)", + node_id, + session_id, + opaque, + ) + + # send back a list of available services + if opaque is None: + type_flag = ConfigFlags.NONE.value + data_types = tuple( + repeat(ConfigDataTypes.BOOL.value, len(ServiceManager.services)) + ) + + # sort groups by name and map services to groups + groups = set() + group_map = {} + for name in ServiceManager.services: + service_name = ServiceManager.services[name] + group = service_name.group + groups.add(group) + group_map.setdefault(group, []).append(service_name) + groups = sorted(groups, key=lambda x: x.lower()) + + # define tlv values in proper order + captions = [] + possible_values = [] + values = [] + group_strings = [] + start_index = 1 + logging.debug("sorted groups: %s", groups) + for group in groups: + services = sorted(group_map[group], key=lambda x: x.name.lower()) + logging.debug("sorted services for group(%s): %s", group, services) + end_index = start_index + len(services) - 1 + group_strings.append(f"{group}:{start_index}-{end_index}") + start_index += len(services) + for service_name in services: + captions.append(service_name.name) + values.append("0") + if service_name.custom_needed: + possible_values.append("1") + else: + possible_values.append("") + + # format for tlv + captions = "|".join(captions) + possible_values = "|".join(possible_values) + values = "|".join(values) + groups = "|".join(group_strings) + # send back the properties for this service + else: + if not node_id: + return replies + + node = self.session.get_node(node_id, CoreNodeBase) + if node is 
None: + logging.warning( + "request to configure service for unknown node %s", node_id + ) + return replies + + services = ServiceShim.servicesfromopaque(opaque) + if not services: + return replies + + servicesstring = opaque.split(":") + if len(servicesstring) == 3: + # a file request: e.g. "service:zebra:quagga.conf" + file_name = servicesstring[2] + service_name = services[0] + file_data = self.session.services.get_service_file( + node, service_name, file_name + ) + self.session.broadcast_file(file_data) + # short circuit this request early to avoid returning response below + return replies + + # the first service in the list is the one being configured + service_name = services[0] + # send back: + # dirs, configs, startindex, startup, shutdown, metadata, config + type_flag = ConfigFlags.UPDATE.value + data_types = tuple( + repeat(ConfigDataTypes.STRING.value, len(ServiceShim.keys)) + ) + service = self.session.services.get_service( + node_id, service_name, default_service=True + ) + values = ServiceShim.tovaluelist(node, service) + captions = None + possible_values = None + groups = None + + config_response = ConfigData( + message_type=0, + node=node_id, + object=self.session.services.name, + type=type_flag, + data_types=data_types, + data_values=values, + captions=captions, + possible_values=possible_values, + groups=groups, + session=session_id, + opaque=opaque, + ) + replies.append(config_response) + elif message_type == ConfigFlags.RESET: + self.session.services.reset() + else: + data_types = config_data.data_types + values = config_data.data_values + + error_message = "services config message that I don't know how to handle" + if values is None: + logging.error(error_message) + else: + if opaque is None: + values = values.split("|") + # store default services for a node type in self.defaultservices[] + if ( + data_types is None + or data_types[0] != ConfigDataTypes.STRING.value + ): + logging.info(error_message) + return None + key = values.pop(0) + self.session.services.default_services[key] = values + logging.debug("default services for type %s set to %s", key, values) + elif node_id: + services = ServiceShim.servicesfromopaque(opaque) + if services: + service_name = services[0] + + # set custom service for node + self.session.services.set_service(node_id, service_name) + + # set custom values for custom service + service = self.session.services.get_service( + node_id, service_name + ) + if not service: + raise ValueError( + "custom service(%s) for node(%s) does not exist", + service_name, + node_id, + ) + + values = ConfigShim.str_to_dict(values) + for name in values: + value = values[name] + ServiceShim.setvalue(service, name, value) + + return replies + + def handle_config_mobility(self, message_type, _): + if message_type == ConfigFlags.RESET: + self.session.mobility.reset() + + def handle_config_mobility_models(self, message_type, config_data): + replies = [] + node_id = config_data.node + object_name = config_data.object + iface_id = config_data.iface_id + values_str = config_data.data_values + + node_id = utils.iface_config_id(node_id, iface_id) + logging.debug( + "received configure message for %s nodenum: %s", object_name, node_id + ) + if message_type == ConfigFlags.REQUEST: + logging.info("replying to configure request for model: %s", object_name) + typeflags = ConfigFlags.NONE.value + + model_class = self.session.mobility.models.get(object_name) + if not model_class: + logging.warning("model class does not exist: %s", object_name) + return [] + + config = 
self.session.mobility.get_model_config(node_id, object_name) + config_response = ConfigShim.config_data( + 0, node_id, typeflags, model_class, config + ) + replies.append(config_response) + elif message_type != ConfigFlags.RESET: + # store the configuration values for later use, when the node + if not object_name: + logging.warning("no configuration object for node: %s", node_id) + return [] + + parsed_config = {} + if values_str: + parsed_config = ConfigShim.str_to_dict(values_str) + + self.session.mobility.set_model_config(node_id, object_name, parsed_config) + if self.session.state == EventTypes.RUNTIME_STATE and parsed_config: + try: + node = self.session.get_node(node_id, WlanNode) + if object_name == BasicRangeModel.name: + node.updatemodel(parsed_config) + except CoreError: + logging.error( + "skipping mobility configuration for unknown node: %s", node_id + ) + + return replies + + def handle_config_emane(self, message_type, config_data): + replies = [] + node_id = config_data.node + object_name = config_data.object + iface_id = config_data.iface_id + values_str = config_data.data_values + + node_id = utils.iface_config_id(node_id, iface_id) + logging.debug( + "received configure message for %s nodenum: %s", object_name, node_id + ) + if message_type == ConfigFlags.REQUEST: + logging.info("replying to configure request for %s model", object_name) + typeflags = ConfigFlags.NONE.value + config = self.session.emane.get_configs() + config_response = ConfigShim.config_data( + 0, node_id, typeflags, self.session.emane.emane_config, config + ) + replies.append(config_response) + elif message_type != ConfigFlags.RESET: + if not object_name: + logging.info("no configuration object for node %s", node_id) + return [] + + if values_str: + config = ConfigShim.str_to_dict(values_str) + self.session.emane.set_configs(config) + + return replies + + def handle_config_emane_models(self, message_type, config_data): + replies = [] + node_id = config_data.node + object_name = config_data.object + iface_id = config_data.iface_id + values_str = config_data.data_values + + node_id = utils.iface_config_id(node_id, iface_id) + logging.debug( + "received configure message for %s nodenum: %s", object_name, node_id + ) + if message_type == ConfigFlags.REQUEST: + logging.info("replying to configure request for model: %s", object_name) + typeflags = ConfigFlags.NONE.value + + model_class = self.session.emane.models.get(object_name) + if not model_class: + logging.warning("model class does not exist: %s", object_name) + return [] + + config = self.session.emane.get_model_config(node_id, object_name) + config_response = ConfigShim.config_data( + 0, node_id, typeflags, model_class, config + ) + replies.append(config_response) + elif message_type != ConfigFlags.RESET: + # store the configuration values for later use, when the node + if not object_name: + logging.warning("no configuration object for node: %s", node_id) + return [] + + parsed_config = {} + if values_str: + parsed_config = ConfigShim.str_to_dict(values_str) + + self.session.emane.set_model_config(node_id, object_name, parsed_config) + + return replies + + def handle_file_message(self, message): + """ + File Message handler + + :param core.api.tlv.coreapi.CoreFileMessage message: file message to handle + :return: reply messages + """ + if message.flags & MessageFlags.ADD.value: + node_num = message.get_tlv(FileTlvs.NODE.value) + file_name = message.get_tlv(FileTlvs.NAME.value) + file_type = message.get_tlv(FileTlvs.TYPE.value) + source_name = 
message.get_tlv(FileTlvs.SOURCE_NAME.value) + data = message.get_tlv(FileTlvs.DATA.value) + compressed_data = message.get_tlv(FileTlvs.COMPRESSED_DATA.value) + + if compressed_data: + logging.warning( + "Compressed file data not implemented for File message." + ) + return () + + if source_name and data: + logging.warning( + "ignoring invalid File message: source and data TLVs are both present" + ) + return () + + # some File Messages store custom files in services, + # prior to node creation + if file_type is not None: + if file_type.startswith("service:"): + _, service_name = file_type.split(":")[:2] + self.session.services.set_service_file( + node_num, service_name, file_name, data + ) + return () + elif file_type.startswith("hook:"): + _, state = file_type.split(":")[:2] + if not state.isdigit(): + logging.error("error setting hook having state '%s'", state) + return () + state = int(state) + state = EventTypes(state) + self.session.add_hook(state, file_name, data, source_name) + return () + + # writing a file to the host + if node_num is None: + if source_name is not None: + shutil.copy2(source_name, file_name) + else: + with open(file_name, "w") as open_file: + open_file.write(data) + return () + + self.session.add_node_file(node_num, source_name, file_name, data) + else: + raise NotImplementedError + + return () + + def handle_iface_message(self, message): + """ + Interface Message handler. + + :param message: interface message to handle + :return: reply messages + """ + logging.info("ignoring Interface message") + return () + + def handle_event_message(self, message): + """ + Event Message handler + + :param core.api.tlv.coreapi.CoreEventMessage message: event message to handle + :return: reply messages + :raises core.CoreError: when event type <= SHUTDOWN_STATE and not a known node id + """ + event_type_value = message.get_tlv(EventTlvs.TYPE.value) + event_type = EventTypes(event_type_value) + event_data = EventData( + node=message.get_tlv(EventTlvs.NODE.value), + event_type=event_type, + name=message.get_tlv(EventTlvs.NAME.value), + data=message.get_tlv(EventTlvs.DATA.value), + time=message.get_tlv(EventTlvs.TIME.value), + session=message.get_tlv(EventTlvs.SESSION.value), + ) + + if event_data.event_type is None: + raise NotImplementedError("Event message missing event type") + node_id = event_data.node + + logging.debug("handling event %s at %s", event_type.name, time.ctime()) + if event_type.value <= EventTypes.SHUTDOWN_STATE.value: + if node_id is not None: + node = self.session.get_node(node_id, NodeBase) + + # configure mobility models for WLAN added during runtime + if event_type == EventTypes.INSTANTIATION_STATE and isinstance( + node, WlanNode + ): + self.session.start_mobility(node_ids=[node.id]) + return () + + logging.warning( + "dropping unhandled event message for node: %s", node.name + ) + return () + self.session.set_state(event_type) + + if event_type == EventTypes.DEFINITION_STATE: + # clear all session objects in order to receive new definitions + self.session.clear() + elif event_type == EventTypes.INSTANTIATION_STATE: + if len(self.handler_threads) > 1: + # TODO: sync handler threads here before continuing + time.sleep(2.0) # XXX + # done receiving node/link configuration, ready to instantiate + self.session.instantiate() + + # after booting nodes attempt to send emulation id for nodes waiting on status + for _id in self.session.nodes: + self.send_node_emulation_id(_id) + elif event_type == EventTypes.RUNTIME_STATE: + logging.warning("Unexpected event message: 
RUNTIME state received") + elif event_type == EventTypes.DATACOLLECT_STATE: + self.session.data_collect() + elif event_type == EventTypes.SHUTDOWN_STATE: + logging.warning("Unexpected event message: SHUTDOWN state received") + elif event_type in { + EventTypes.START, + EventTypes.STOP, + EventTypes.RESTART, + EventTypes.PAUSE, + EventTypes.RECONFIGURE, + }: + handled = False + name = event_data.name + if name: + # TODO: register system for event message handlers, + # like confobjs + if name.startswith("service:"): + self.handle_service_event(event_data) + handled = True + elif name.startswith("mobility:"): + self.session.mobility_event(event_data) + handled = True + if not handled: + logging.warning( + "unhandled event message: event type %s, name %s ", + event_type.name, + name, + ) + elif event_type == EventTypes.FILE_OPEN: + filename = event_data.name + self.session.open_xml(filename, start=False) + self.send_objects() + return () + elif event_type == EventTypes.FILE_SAVE: + filename = event_data.name + self.session.save_xml(filename) + elif event_type == EventTypes.SCHEDULED: + etime = event_data.time + node_id = event_data.node + name = event_data.name + data = event_data.data + if etime is None: + logging.warning("Event message scheduled event missing start time") + return () + if message.flags & MessageFlags.ADD.value: + self.session.add_event( + float(etime), node_id=node_id, name=name, data=data + ) + else: + raise NotImplementedError + + return () + + def handle_service_event(self, event_data): + """ + Handle an Event Message used to start, stop, restart, or validate + a service on a given node. + + :param core.emulator.enumerations.EventData event_data: event data to handle + :return: nothing + """ + event_type = event_data.event_type + node_id = event_data.node + name = event_data.name + + try: + node = self.session.get_node(node_id, CoreNodeBase) + except CoreError: + logging.warning( + "ignoring event for service '%s', unknown node '%s'", name, node_id + ) + return + + fail = "" + unknown = [] + services = ServiceShim.servicesfromopaque(name) + for service_name in services: + service = self.session.services.get_service( + node_id, service_name, default_service=True + ) + if not service: + unknown.append(service_name) + continue + + if event_type in [EventTypes.STOP, EventTypes.RESTART]: + status = self.session.services.stop_service(node, service) + if status: + fail += f"Stop {service.name}," + if event_type in [EventTypes.START, EventTypes.RESTART]: + status = self.session.services.startup_service(node, service) + if status: + fail += f"Start ({service.name})," + if event_type == EventTypes.PAUSE: + status = self.session.services.validate_service(node, service) + if status: + fail += f"{service.name}," + if event_type == EventTypes.RECONFIGURE: + self.session.services.service_reconfigure(node, service) + + fail_data = "" + if len(fail) > 0: + fail_data += f"Fail:{fail}" + unknown_data = "" + num = len(unknown) + if num > 0: + for u in unknown: + unknown_data += u + if num > 1: + unknown_data += ", " + num -= 1 + logging.warning("Event requested for unknown service(s): %s", unknown_data) + unknown_data = f"Unknown:{unknown_data}" + + event_data = EventData( + node=node_id, + event_type=event_type, + name=name, + data=fail_data + ";" + unknown_data, + time=str(time.monotonic()), + ) + + self.session.broadcast_event(event_data) + + def handle_session_message(self, message): + """ + Session Message handler + + :param core.api.tlv.coreapi.CoreSessionMessage message: session 
message to handle + :return: reply messages + """ + session_id_str = message.get_tlv(SessionTlvs.NUMBER.value) + session_ids = coreapi.str_to_list(session_id_str) + name_str = message.get_tlv(SessionTlvs.NAME.value) + names = coreapi.str_to_list(name_str) + file_str = message.get_tlv(SessionTlvs.FILE.value) + files = coreapi.str_to_list(file_str) + thumb = message.get_tlv(SessionTlvs.THUMB.value) + user = message.get_tlv(SessionTlvs.USER.value) + logging.debug( + "SESSION message flags=0x%x sessions=%s", message.flags, session_id_str + ) + + if message.flags == 0: + for index, session_id in enumerate(session_ids): + session_id = int(session_id) + if session_id == 0: + session = self.session + else: + session = self.coreemu.sessions.get(session_id) + + if session is None: + logging.warning("session %s not found", session_id) + continue + + if names is not None: + session.name = names[index] + + if files is not None: + session.file_name = files[index] + + if thumb: + session.set_thumbnail(thumb) + + if user: + session.set_user(user) + elif ( + message.flags & MessageFlags.STRING.value + and not message.flags & MessageFlags.ADD.value + ): + # status request flag: send list of sessions + return (self.session_message(),) + else: + # handle ADD or DEL flags + for session_id in session_ids: + session_id = int(session_id) + session = self.coreemu.sessions.get(session_id) + + if session is None: + logging.info( + "session %s not found (flags=0x%x)", session_id, message.flags + ) + continue + + if message.flags & MessageFlags.ADD.value: + # connect to the first session that exists + logging.info("request to connect to session %s", session_id) + + # remove client from session broker and shutdown if needed + self.remove_session_handlers() + clients = self.session_clients[self.session.id] + clients.remove(self) + if not clients and not self.session.is_active(): + self.coreemu.delete_session(self.session.id) + + # set session to join + self.session = session + + # add client to session broker + clients = self.session_clients.setdefault(self.session.id, []) + clients.append(self) + + # add broadcast handlers + logging.info("adding session broadcast handlers") + self.add_session_handlers() + + if user: + self.session.set_user(user) + + if message.flags & MessageFlags.STRING.value: + self.send_objects() + elif message.flags & MessageFlags.DELETE.value: + # shut down the specified session(s) + logging.info("request to terminate session %s", session_id) + self.coreemu.delete_session(session_id) + else: + logging.warning( + "unhandled session flags for session %s", session_id + ) + + return () + + def send_node_emulation_id(self, node_id): + """ + Node emulation id to send. + + :param int node_id: node id to send + :return: nothing + """ + if node_id in self.node_status_request: + tlv_data = b"" + tlv_data += coreapi.CoreNodeTlv.pack(NodeTlvs.NUMBER.value, node_id) + tlv_data += coreapi.CoreNodeTlv.pack(NodeTlvs.EMULATION_ID.value, node_id) + reply = coreapi.CoreNodeMessage.pack( + MessageFlags.ADD.value | MessageFlags.LOCAL.value, tlv_data + ) + + try: + self.sendall(reply) + except IOError: + logging.exception( + "error sending node emulation id message: %s", node_id + ) + + del self.node_status_request[node_id] + + def send_objects(self): + """ + Return API messages that describe the current session. 
+ """ + # find all nodes and links + all_links = [] + with self.session.nodes_lock: + for node_id in self.session.nodes: + node = self.session.nodes[node_id] + self.session.broadcast_node(node, MessageFlags.ADD) + links = node.links(flags=MessageFlags.ADD) + all_links.extend(links) + + for link in all_links: + self.session.broadcast_link(link) + + # send mobility model info + for node_id in self.session.mobility.nodes(): + mobility_configs = self.session.mobility.get_all_configs(node_id) + for model_name in mobility_configs: + config = mobility_configs[model_name] + model_class = self.session.mobility.models[model_name] + logging.debug( + "mobility config: node(%s) class(%s) values(%s)", + node_id, + model_class, + config, + ) + config_data = ConfigShim.config_data( + 0, node_id, ConfigFlags.UPDATE.value, model_class, config + ) + self.session.broadcast_config(config_data) + + # send global emane config + config = self.session.emane.get_configs() + logging.debug("global emane config: values(%s)", config) + config_data = ConfigShim.config_data( + 0, None, ConfigFlags.UPDATE.value, self.session.emane.emane_config, config + ) + self.session.broadcast_config(config_data) + + # send emane model configs + for node_id in self.session.emane.nodes(): + emane_configs = self.session.emane.get_all_configs(node_id) + for model_name in emane_configs: + config = emane_configs[model_name] + model_class = self.session.emane.models[model_name] + logging.debug( + "emane config: node(%s) class(%s) values(%s)", + node_id, + model_class, + config, + ) + config_data = ConfigShim.config_data( + 0, node_id, ConfigFlags.UPDATE.value, model_class, config + ) + self.session.broadcast_config(config_data) + + # service customizations + service_configs = self.session.services.all_configs() + for node_id, service in service_configs: + opaque = f"service:{service.name}" + data_types = tuple( + repeat(ConfigDataTypes.STRING.value, len(ServiceShim.keys)) + ) + node = self.session.get_node(node_id, CoreNodeBase) + values = ServiceShim.tovaluelist(node, service) + config_data = ConfigData( + message_type=0, + node=node_id, + object=self.session.services.name, + type=ConfigFlags.UPDATE.value, + data_types=data_types, + data_values=values, + session=self.session.id, + opaque=opaque, + ) + self.session.broadcast_config(config_data) + + for file_name, config_data in self.session.services.all_files(service): + file_data = FileData( + message_type=MessageFlags.ADD, + node=node_id, + name=str(file_name), + type=opaque, + data=str(config_data), + ) + self.session.broadcast_file(file_data) + + # TODO: send location info + + # send hook scripts + for state in sorted(self.session.hooks): + for file_name, config_data in self.session.hooks[state]: + file_data = FileData( + message_type=MessageFlags.ADD, + name=str(file_name), + type=f"hook:{state.value}", + data=str(config_data), + ) + self.session.broadcast_file(file_data) + + # send session configuration + session_config = self.session.options.get_configs() + config_data = ConfigShim.config_data( + 0, None, ConfigFlags.UPDATE.value, self.session.options, session_config + ) + self.session.broadcast_config(config_data) + + # send session metadata + metadata_configs = self.session.metadata + if metadata_configs: + data_values = "|".join( + [f"{x}={metadata_configs[x]}" for x in metadata_configs] + ) + data_types = tuple( + ConfigDataTypes.STRING.value for _ in self.session.metadata + ) + config_data = ConfigData( + message_type=0, + object="metadata", + type=ConfigFlags.NONE.value, + 
data_types=data_types, + data_values=data_values, + ) + self.session.broadcast_config(config_data) + + node_count = self.session.get_node_count() + logging.info( + "informed GUI about %d nodes and %d links", node_count, len(all_links) + ) + + +class CoreUdpHandler(CoreHandler): + def __init__(self, request, client_address, server): + self.message_handlers = { + MessageTypes.NODE.value: self.handle_node_message, + MessageTypes.LINK.value: self.handle_link_message, + MessageTypes.EXECUTE.value: self.handle_execute_message, + MessageTypes.REGISTER.value: self.handle_register_message, + MessageTypes.CONFIG.value: self.handle_config_message, + MessageTypes.FILE.value: self.handle_file_message, + MessageTypes.INTERFACE.value: self.handle_iface_message, + MessageTypes.EVENT.value: self.handle_event_message, + MessageTypes.SESSION.value: self.handle_session_message, + } + self.session = None + self.coreemu = server.mainserver.coreemu + self.tcp_handler = server.RequestHandlerClass + socketserver.BaseRequestHandler.__init__(self, request, client_address, server) + + def setup(self): + """ + Client has connected, set up a new connection. + :return: nothing + """ + pass + + def receive_message(self): + data = self.request[0] + header = data[: coreapi.CoreMessage.header_len] + if len(header) < coreapi.CoreMessage.header_len: + raise IOError(f"error receiving header (received {len(header)} bytes)") + + message_type, message_flags, message_len = coreapi.CoreMessage.unpack_header( + header + ) + if message_len == 0: + logging.warning("received message with no data") + return + + if len(data) != coreapi.CoreMessage.header_len + message_len: + logging.error( + "received message length does not match received data (%s != %s)", + len(data), + coreapi.CoreMessage.header_len + message_len, + ) + raise IOError + + try: + message_class = coreapi.CLASS_MAP[message_type] + message = message_class( + message_flags, header, data[coreapi.CoreMessage.header_len :] + ) + return message + except KeyError: + message = coreapi.CoreMessage( + message_flags, header, data[coreapi.CoreMessage.header_len :] + ) + message.msgtype = message_type + logging.exception("unimplemented core message type: %s", message.type_str()) + + def handle(self): + message = self.receive_message() + sessions = message.session_numbers() + message.queuedtimes = 0 + if sessions: + for session_id in sessions: + session = self.server.mainserver.coreemu.sessions.get(session_id) + if session: + logging.debug("session handling message: %s", session.id) + self.session = session + self.handle_message(message) + self.broadcast(message) + else: + logging.error( + "session %d in %s message not found.", + session_id, + message.type_str(), + ) + else: + # no session specified, find an existing one + session = None + node_count = 0 + for session_id in self.server.mainserver.coreemu.sessions: + current_session = self.server.mainserver.coreemu.sessions[session_id] + current_node_count = current_session.get_node_count() + if ( + current_session.state == EventTypes.RUNTIME_STATE + and current_node_count > node_count + ): + node_count = current_node_count + session = current_session + + if session or message.message_type == MessageTypes.REGISTER.value: + self.session = session + self.handle_message(message) + self.broadcast(message) + else: + logging.error( + "no active session, dropping %s message.", message.type_str() + ) + + def broadcast(self, message): + if not isinstance(message, (coreapi.CoreNodeMessage, coreapi.CoreLinkMessage)): + return + + clients = 
self.tcp_handler.session_clients.get(self.session.id, []) + for client in clients: + try: + client.sendall(message.raw_message) + except IOError: + logging.error("error broadcasting") + + def finish(self): + return socketserver.BaseRequestHandler.finish(self) + + def queuemsg(self, msg): + """ + UDP handlers are short-lived and do not have message queues. + + :param bytes msg: message to queue + :return: + """ + raise Exception( + f"Unable to queue {msg} message for later processing using UDP!" + ) + + def sendall(self, data): + """ + Use sendto() on the connectionless UDP socket. + + :param data: + :return: + """ + self.request[1].sendto(data, self.client_address) diff --git a/daemon/core/api/tlv/coreserver.py b/daemon/core/api/tlv/coreserver.py new file mode 100644 index 00000000..c51e8023 --- /dev/null +++ b/daemon/core/api/tlv/coreserver.py @@ -0,0 +1,60 @@ +""" +Defines core server for handling TCP connections. +""" + +import socketserver + +from core.emulator.coreemu import CoreEmu + + +class CoreServer(socketserver.ThreadingMixIn, socketserver.TCPServer): + """ + TCP server class, manages sessions and spawns request handlers for + incoming connections. + """ + + daemon_threads = True + allow_reuse_address = True + + def __init__(self, server_address, handler_class, config=None): + """ + Server class initialization takes configuration data and calls + the socketserver constructor. + + :param tuple[str, int] server_address: server host and port to use + :param class handler_class: request handler + :param dict config: configuration setting + """ + self.coreemu = CoreEmu(config) + self.config = config + socketserver.TCPServer.__init__(self, server_address, handler_class) + + +class CoreUdpServer(socketserver.ThreadingMixIn, socketserver.UDPServer): + """ + UDP server class, manages sessions and spawns request handlers for + incoming connections. + """ + + daemon_threads = True + allow_reuse_address = True + + def __init__(self, server_address, handler_class, mainserver): + """ + Server class initialization takes configuration data and calls + the SocketServer constructor + + :param server_address: + :param class handler_class: request handler + :param mainserver: + """ + self.mainserver = mainserver + socketserver.UDPServer.__init__(self, server_address, handler_class) + + def start(self): + """ + Thread target to run concurrently with the TCP server. + + :return: nothing + """ + self.serve_forever() diff --git a/daemon/core/api/tlv/dataconversion.py b/daemon/core/api/tlv/dataconversion.py new file mode 100644 index 00000000..8a26300a --- /dev/null +++ b/daemon/core/api/tlv/dataconversion.py @@ -0,0 +1,176 @@ +""" +Converts CORE data objects into legacy API messages. +""" +import logging +from collections import OrderedDict +from typing import Dict, List + +from core.api.tlv import coreapi, structutils +from core.api.tlv.enumerations import ConfigTlvs, NodeTlvs +from core.config import ConfigGroup, ConfigurableOptions +from core.emulator.data import ConfigData, NodeData + + +def convert_node(node_data: NodeData): + """ + Convenience method for converting NodeData to a packed TLV message. 
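Before the data-conversion helpers below, a hedged sketch of how the `CoreServer` and `CoreUdpServer` classes added in coreserver.py might be wired together. The startup logic and handler classes here are assumptions for illustration, not the daemon's actual entry point; port 4038 is the `CORE_API_PORT` defined later in enumerations.py.

```python
# Sketch only: the startup logic and handler classes are assumptions for
# illustration, not the daemon's actual entry point.
import threading

from core.api.tlv.coreserver import CoreServer, CoreUdpServer


def start_servers(tcp_handler_class, udp_handler_class, address=("localhost", 4038)):
    # the TCP server owns the CoreEmu instance and session state
    tcp_server = CoreServer(address, tcp_handler_class, config={})
    # the UDP server delegates session lookup to its TCP "mainserver"
    udp_server = CoreUdpServer(address, udp_handler_class, tcp_server)
    threading.Thread(target=udp_server.start, daemon=True).start()
    tcp_server.serve_forever()
```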
+ + :param core.emulator.data.NodeData node_data: node data to convert + :return: packed node message + """ + node = node_data.node + services = None + if node.services is not None: + services = "|".join([x.name for x in node.services]) + server = None + if node.server is not None: + server = node.server.name + tlv_data = structutils.pack_values( + coreapi.CoreNodeTlv, + [ + (NodeTlvs.NUMBER, node.id), + (NodeTlvs.TYPE, node.apitype.value), + (NodeTlvs.NAME, node.name), + (NodeTlvs.MODEL, node.type), + (NodeTlvs.EMULATION_SERVER, server), + (NodeTlvs.X_POSITION, int(node.position.x)), + (NodeTlvs.Y_POSITION, int(node.position.y)), + (NodeTlvs.CANVAS, node.canvas), + (NodeTlvs.SERVICES, services), + (NodeTlvs.LATITUDE, str(node.position.lat)), + (NodeTlvs.LONGITUDE, str(node.position.lon)), + (NodeTlvs.ALTITUDE, str(node.position.alt)), + (NodeTlvs.ICON, node.icon), + ], + ) + return coreapi.CoreNodeMessage.pack(node_data.message_type.value, tlv_data) + + +def convert_config(config_data): + """ + Convenience method for converting ConfigData to a packed TLV message. + + :param core.emulator.data.ConfigData config_data: config data to convert + :return: packed message + """ + session = None + if config_data.session is not None: + session = str(config_data.session) + tlv_data = structutils.pack_values( + coreapi.CoreConfigTlv, + [ + (ConfigTlvs.NODE, config_data.node), + (ConfigTlvs.OBJECT, config_data.object), + (ConfigTlvs.TYPE, config_data.type), + (ConfigTlvs.DATA_TYPES, config_data.data_types), + (ConfigTlvs.VALUES, config_data.data_values), + (ConfigTlvs.CAPTIONS, config_data.captions), + (ConfigTlvs.BITMAP, config_data.bitmap), + (ConfigTlvs.POSSIBLE_VALUES, config_data.possible_values), + (ConfigTlvs.GROUPS, config_data.groups), + (ConfigTlvs.SESSION, session), + (ConfigTlvs.IFACE_ID, config_data.iface_id), + (ConfigTlvs.NETWORK_ID, config_data.network_id), + (ConfigTlvs.OPAQUE, config_data.opaque), + ], + ) + return coreapi.CoreConfMessage.pack(config_data.message_type, tlv_data) + + +class ConfigShim: + """ + Provides helper methods for converting newer configuration values into TLV + compatible formats. + """ + + @classmethod + def str_to_dict(cls, key_values: str) -> Dict[str, str]: + """ + Converts a TLV key/value string into an ordered mapping. + + :param key_values: + :return: ordered mapping of key/value pairs + """ + key_values = key_values.split("|") + values = OrderedDict() + for key_value in key_values: + key, value = key_value.split("=", 1) + values[key] = value + return values + + @classmethod + def groups_to_str(cls, config_groups: List[ConfigGroup]) -> str: + """ + Converts configuration groups to a TLV formatted string. + + :param config_groups: configuration groups to format + :return: TLV configuration group string + """ + group_strings = [] + for config_group in config_groups: + group_string = ( + f"{config_group.name}:{config_group.start}-{config_group.stop}" + ) + group_strings.append(group_string) + return "|".join(group_strings) + + @classmethod + def config_data( + cls, + flags: int, + node_id: int, + type_flags: int, + configurable_options: ConfigurableOptions, + config: Dict[str, str], + ) -> ConfigData: + """ + Convert this class to a Config API message. Some TLVs are defined + by the class, but node number, conf type flags, and values must + be passed in. 
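`ConfigShim.str_to_dict` above is the inverse of the flattening that `config_data` performs: TLV configuration values travel as one pipe-delimited key=value string. A short round-trip example using the same splitting rules (the option names and values are illustrative):

```python
from collections import OrderedDict


def str_to_dict(key_values: str) -> "OrderedDict[str, str]":
    # same splitting rules as ConfigShim.str_to_dict above
    values = OrderedDict()
    for key_value in key_values.split("|"):
        key, value = key_value.split("=", 1)
        values[key] = value
    return values


config = str_to_dict("range=275|bandwidth=54000000|error=0.0")
print(config)  # OrderedDict([('range', '275'), ('bandwidth', '54000000'), ('error', '0.0')])

# and back to the wire format, as done when building ConfigData values
print("|".join(f"{key}={value}" for key, value in config.items()))
```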
+ + :param flags: message flags + :param node_id: node id + :param type_flags: type flags + :param configurable_options: options to create config data for + :param config: configuration values for options + :return: configuration data object + """ + key_values = None + captions = None + data_types = [] + possible_values = [] + logging.debug("configurable: %s", configurable_options) + logging.debug("configuration options: %s", configurable_options.configurations) + logging.debug("configuration data: %s", config) + for configuration in configurable_options.configurations(): + if not captions: + captions = configuration.label + else: + captions += f"|{configuration.label}" + + data_types.append(configuration.type.value) + + options = ",".join(configuration.options) + possible_values.append(options) + + _id = configuration.id + config_value = config.get(_id, configuration.default) + key_value = f"{_id}={config_value}" + if not key_values: + key_values = key_value + else: + key_values += f"|{key_value}" + + groups_str = cls.groups_to_str(configurable_options.config_groups()) + return ConfigData( + message_type=flags, + node=node_id, + object=configurable_options.name, + type=type_flags, + data_types=tuple(data_types), + data_values=key_values, + captions=captions, + possible_values="|".join(possible_values), + bitmap=configurable_options.bitmap, + groups=groups_str, + ) diff --git a/daemon/core/api/tlv/enumerations.py b/daemon/core/api/tlv/enumerations.py new file mode 100644 index 00000000..f2b35703 --- /dev/null +++ b/daemon/core/api/tlv/enumerations.py @@ -0,0 +1,212 @@ +""" +Enumerations specific to the CORE TLV API. +""" +from enum import Enum + +CORE_API_PORT = 4038 + + +class MessageTypes(Enum): + """ + CORE message types. + """ + + NODE = 0x01 + LINK = 0x02 + EXECUTE = 0x03 + REGISTER = 0x04 + CONFIG = 0x05 + FILE = 0x06 + INTERFACE = 0x07 + EVENT = 0x08 + SESSION = 0x09 + EXCEPTION = 0x0A + + +class NodeTlvs(Enum): + """ + Node type, length, value enumerations. + """ + + NUMBER = 0x01 + TYPE = 0x02 + NAME = 0x03 + IP_ADDRESS = 0x04 + MAC_ADDRESS = 0x05 + IP6_ADDRESS = 0x06 + MODEL = 0x07 + EMULATION_SERVER = 0x08 + SESSION = 0x0A + X_POSITION = 0x20 + Y_POSITION = 0x21 + CANVAS = 0x22 + EMULATION_ID = 0x23 + NETWORK_ID = 0x24 + SERVICES = 0x25 + LATITUDE = 0x30 + LONGITUDE = 0x31 + ALTITUDE = 0x32 + ICON = 0x42 + OPAQUE = 0x50 + + +class LinkTlvs(Enum): + """ + Link type, length, value enumerations. + """ + + N1_NUMBER = 0x01 + N2_NUMBER = 0x02 + DELAY = 0x03 + BANDWIDTH = 0x04 + LOSS = 0x05 + DUP = 0x06 + JITTER = 0x07 + MER = 0x08 + BURST = 0x09 + SESSION = 0x0A + MBURST = 0x10 + TYPE = 0x20 + GUI_ATTRIBUTES = 0x21 + UNIDIRECTIONAL = 0x22 + EMULATION_ID = 0x23 + NETWORK_ID = 0x24 + KEY = 0x25 + IFACE1_NUMBER = 0x30 + IFACE1_IP4 = 0x31 + IFACE1_IP4_MASK = 0x32 + IFACE1_MAC = 0x33 + IFACE1_IP6 = 0x34 + IFACE1_IP6_MASK = 0x35 + IFACE2_NUMBER = 0x36 + IFACE2_IP4 = 0x37 + IFACE2_IP4_MASK = 0x38 + IFACE2_MAC = 0x39 + IFACE2_IP6 = 0x40 + IFACE2_IP6_MASK = 0x41 + IFACE1_NAME = 0x42 + IFACE2_NAME = 0x43 + OPAQUE = 0x50 + + +class ExecuteTlvs(Enum): + """ + Execute type, length, value enumerations. + """ + + NODE = 0x01 + NUMBER = 0x02 + TIME = 0x03 + COMMAND = 0x04 + RESULT = 0x05 + STATUS = 0x06 + SESSION = 0x0A + + +class ConfigTlvs(Enum): + """ + Configuration type, length, value enumerations. 
+ """ + + NODE = 0x01 + OBJECT = 0x02 + TYPE = 0x03 + DATA_TYPES = 0x04 + VALUES = 0x05 + CAPTIONS = 0x06 + BITMAP = 0x07 + POSSIBLE_VALUES = 0x08 + GROUPS = 0x09 + SESSION = 0x0A + IFACE_ID = 0x0B + NETWORK_ID = 0x24 + OPAQUE = 0x50 + + +class ConfigFlags(Enum): + """ + Configuration flags. + """ + + NONE = 0x00 + REQUEST = 0x01 + UPDATE = 0x02 + RESET = 0x03 + + +class FileTlvs(Enum): + """ + File type, length, value enumerations. + """ + + NODE = 0x01 + NAME = 0x02 + MODE = 0x03 + NUMBER = 0x04 + TYPE = 0x05 + SOURCE_NAME = 0x06 + SESSION = 0x0A + DATA = 0x10 + COMPRESSED_DATA = 0x11 + + +class InterfaceTlvs(Enum): + """ + Interface type, length, value enumerations. + """ + + NODE = 0x01 + NUMBER = 0x02 + NAME = 0x03 + IP_ADDRESS = 0x04 + MASK = 0x05 + MAC_ADDRESS = 0x06 + IP6_ADDRESS = 0x07 + IP6_MASK = 0x08 + TYPE = 0x09 + SESSION = 0x0A + STATE = 0x0B + EMULATION_ID = 0x23 + NETWORK_ID = 0x24 + + +class EventTlvs(Enum): + """ + Event type, length, value enumerations. + """ + + NODE = 0x01 + TYPE = 0x02 + NAME = 0x03 + DATA = 0x04 + TIME = 0x05 + SESSION = 0x0A + + +class SessionTlvs(Enum): + """ + Session type, length, value enumerations. + """ + + NUMBER = 0x01 + NAME = 0x02 + FILE = 0x03 + NODE_COUNT = 0x04 + DATE = 0x05 + THUMB = 0x06 + USER = 0x07 + OPAQUE = 0x0A + + +class ExceptionTlvs(Enum): + """ + Exception type, length, value enumerations. + """ + + NODE = 0x01 + SESSION = 0x02 + LEVEL = 0x03 + SOURCE = 0x04 + DATE = 0x05 + TEXT = 0x06 + OPAQUE = 0x0A diff --git a/daemon/core/api/tlv/structutils.py b/daemon/core/api/tlv/structutils.py new file mode 100644 index 00000000..41358848 --- /dev/null +++ b/daemon/core/api/tlv/structutils.py @@ -0,0 +1,43 @@ +""" +Utilities for working with python struct data. +""" + +import logging + + +def pack_values(clazz, packers): + """ + Pack values for a given legacy class. + + :param class clazz: class that will provide a pack method + :param list packers: a list of tuples that are used to pack values and transform them + :return: packed data string of all values + """ + + # iterate through tuples of values to pack + logging.debug("packing: %s", packers) + data = b"" + for packer in packers: + # check if a transformer was provided for valid values + transformer = None + if len(packer) == 2: + tlv_type, value = packer + elif len(packer) == 3: + tlv_type, value, transformer = packer + else: + raise RuntimeError("packer had more than 3 arguments") + + # only pack actual values and avoid packing empty strings + # protobuf defaults to empty strings and does no imply a value to set + if value is None or (isinstance(value, str) and not value): + continue + + # transform values as needed + if transformer: + value = transformer(value) + + # pack and add to existing data + logging.debug("packing: %s - %s type(%s)", tlv_type, value, type(value)) + data += clazz.pack(tlv_type.value, value) + + return data diff --git a/daemon/core/config.py b/daemon/core/config.py index 7a6ffa49..222abf01 100644 --- a/daemon/core/config.py +++ b/daemon/core/config.py @@ -4,112 +4,73 @@ Common support for configurable CORE objects. 
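As an illustration of the `structutils.pack_values` helper introduced above, the sketch below packs a few node TLVs; the values are made up, `None` entries are skipped entirely, and the optional third tuple element is the transformer applied before packing.

```python
# Illustrative call; the TLV values are made up, and the optional third
# tuple element is a transformer applied before packing.
from core.api.tlv import coreapi, structutils
from core.api.tlv.enumerations import NodeTlvs

tlv_data = structutils.pack_values(
    coreapi.CoreNodeTlv,
    [
        (NodeTlvs.NUMBER, 5),
        (NodeTlvs.NAME, "n5"),
        (NodeTlvs.MODEL, None),             # skipped: None is never packed
        (NodeTlvs.X_POSITION, 100.0, int),  # transformed to 100 before packing
    ],
)
message = coreapi.CoreNodeMessage.pack(0, tlv_data)
```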
import logging from collections import OrderedDict -from dataclasses import dataclass, field -from typing import TYPE_CHECKING, Any, Optional, Union +from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Type, Union from core.emane.nodes import EmaneNet from core.emulator.enumerations import ConfigDataTypes -from core.errors import CoreConfigError from core.nodes.network import WlanNode -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.location.mobility import WirelessModel - WirelessModelType = type[WirelessModel] - -_BOOL_OPTIONS: set[str] = {"0", "1"} + WirelessModelType = Type[WirelessModel] -@dataclass class ConfigGroup: """ Defines configuration group tabs used for display by ConfigurationOptions. """ - name: str - start: int - stop: int + def __init__(self, name: str, start: int, stop: int) -> None: + """ + Creates a ConfigGroup object. + + :param name: configuration group display name + :param start: configurations start index for this group + :param stop: configurations stop index for this group + """ + self.name: str = name + self.start: int = start + self.stop: int = stop -@dataclass class Configuration: """ - Represents a configuration option. + Represents a configuration options. """ - id: str - type: ConfigDataTypes - label: str = None - default: str = "" - options: list[str] = field(default_factory=list) - group: str = "Configuration" + def __init__( + self, + _id: str, + _type: ConfigDataTypes, + label: str = None, + default: str = "", + options: List[str] = None, + ) -> None: + """ + Creates a Configuration object. - def __post_init__(self) -> None: - self.label = self.label if self.label else self.id - if self.type == ConfigDataTypes.BOOL: - if self.default and self.default not in _BOOL_OPTIONS: - raise CoreConfigError( - f"{self.id} bool value must be one of: {_BOOL_OPTIONS}: " - f"{self.default}" - ) - elif self.type == ConfigDataTypes.FLOAT: - if self.default: - try: - float(self.default) - except ValueError: - raise CoreConfigError( - f"{self.id} is not a valid float: {self.default}" - ) - elif self.type != ConfigDataTypes.STRING: - if self.default: - try: - int(self.default) - except ValueError: - raise CoreConfigError( - f"{self.id} is not a valid int: {self.default}" - ) + :param _id: unique name for configuration + :param _type: configuration data type + :param label: configuration label for display + :param default: default value for configuration + :param options: list options if this is a configuration with a combobox + """ + self.id: str = _id + self.type: ConfigDataTypes = _type + self.default: str = default + if not options: + options = [] + self.options: List[str] = options + if not label: + label = _id + self.label: str = label - -@dataclass -class ConfigBool(Configuration): - """ - Represents a boolean configuration option. - """ - - type: ConfigDataTypes = ConfigDataTypes.BOOL - value: bool = False - - -@dataclass -class ConfigFloat(Configuration): - """ - Represents a float configuration option. - """ - - type: ConfigDataTypes = ConfigDataTypes.FLOAT - value: float = 0.0 - - -@dataclass -class ConfigInt(Configuration): - """ - Represents an integer configuration option. - """ - - type: ConfigDataTypes = ConfigDataTypes.INT32 - value: int = 0 - - -@dataclass -class ConfigString(Configuration): - """ - Represents a string configuration option. 
- """ - - type: ConfigDataTypes = ConfigDataTypes.STRING - value: str = "" + def __str__(self): + return ( + f"{self.__class__.__name__}(id={self.id}, type={self.type}, " + f"default={self.default}, options={self.options})" + ) class ConfigurableOptions: @@ -118,10 +79,11 @@ class ConfigurableOptions: """ name: Optional[str] = None - options: list[Configuration] = [] + bitmap: Optional[str] = None + options: List[Configuration] = [] @classmethod - def configurations(cls) -> list[Configuration]: + def configurations(cls) -> List[Configuration]: """ Provides the configurations for this class. @@ -130,7 +92,7 @@ class ConfigurableOptions: return cls.options @classmethod - def config_groups(cls) -> list[ConfigGroup]: + def config_groups(cls) -> List[ConfigGroup]: """ Defines how configurations are grouped. @@ -139,7 +101,7 @@ class ConfigurableOptions: return [ConfigGroup("Options", 1, len(cls.configurations()))] @classmethod - def default_values(cls) -> dict[str, str]: + def default_values(cls) -> Dict[str, str]: """ Provides an ordered mapping of configuration keys to default values. @@ -165,7 +127,7 @@ class ConfigurableManager: """ self.node_configurations = {} - def nodes(self) -> list[int]: + def nodes(self) -> List[int]: """ Retrieves the ids of all node configurations known by this manager. @@ -208,7 +170,7 @@ class ConfigurableManager: def set_configs( self, - config: dict[str, str], + config: Dict[str, str], node_id: int = _default_node, config_type: str = _default_type, ) -> None: @@ -220,7 +182,7 @@ class ConfigurableManager: :param config_type: configuration type to store configuration for :return: nothing """ - logger.debug( + logging.debug( "setting config for node(%s) type(%s): %s", node_id, config_type, config ) node_configs = self.node_configurations.setdefault(node_id, OrderedDict()) @@ -250,7 +212,7 @@ class ConfigurableManager: def get_configs( self, node_id: int = _default_node, config_type: str = _default_type - ) -> Optional[dict[str, str]]: + ) -> Optional[Dict[str, str]]: """ Retrieve configurations for a node and configuration type. @@ -264,7 +226,7 @@ class ConfigurableManager: result = node_configs.get(config_type) return result - def get_all_configs(self, node_id: int = _default_node) -> dict[str, Any]: + def get_all_configs(self, node_id: int = _default_node) -> Dict[str, Any]: """ Retrieve all current configuration types for a node. @@ -284,11 +246,11 @@ class ModelManager(ConfigurableManager): Creates a ModelManager object. """ super().__init__() - self.models: dict[str, Any] = {} - self.node_models: dict[int, str] = {} + self.models: Dict[str, Any] = {} + self.node_models: Dict[int, str] = {} def set_model_config( - self, node_id: int, model_name: str, config: dict[str, str] = None + self, node_id: int, model_name: str, config: Dict[str, str] = None ) -> None: """ Set configuration data for a model. @@ -317,7 +279,7 @@ class ModelManager(ConfigurableManager): # set configuration self.set_configs(model_config, node_id=node_id, config_type=model_name) - def get_model_config(self, node_id: int, model_name: str) -> dict[str, str]: + def get_model_config(self, node_id: int, model_name: str) -> Dict[str, str]: """ Retrieve configuration data for a model. @@ -342,7 +304,7 @@ class ModelManager(ConfigurableManager): self, node: Union[WlanNode, EmaneNet], model_class: "WirelessModelType", - config: dict[str, str] = None, + config: Dict[str, str] = None, ) -> None: """ Set model and model configuration for node. 
@@ -352,7 +314,7 @@ class ModelManager(ConfigurableManager): :param config: model configuration, None for default configuration :return: nothing """ - logger.debug( + logging.debug( "setting model(%s) for node(%s): %s", model_class.name, node.id, config ) self.set_model_config(node.id, model_class.name, config) @@ -361,7 +323,7 @@ class ModelManager(ConfigurableManager): def get_models( self, node: Union[WlanNode, EmaneNet] - ) -> list[tuple[type, dict[str, str]]]: + ) -> List[Tuple[Type, Dict[str, str]]]: """ Return a list of model classes and values for a net if one has been configured. This is invoked when exporting a session to XML. @@ -381,5 +343,5 @@ class ModelManager(ConfigurableManager): model_class = self.models[model_name] models.append((model_class, config)) - logger.debug("models for node(%s): %s", node.id, models) + logging.debug("models for node(%s): %s", node.id, models) return models diff --git a/daemon/core/configservice/base.py b/daemon/core/configservice/base.py index e15260eb..bb97e321 100644 --- a/daemon/core/configservice/base.py +++ b/daemon/core/configservice/base.py @@ -2,10 +2,9 @@ import abc import enum import inspect import logging +import pathlib import time -from dataclasses import dataclass -from pathlib import Path -from typing import Any, Optional +from typing import Any, Dict, List from mako import exceptions from mako.lookup import TemplateLookup @@ -15,24 +14,9 @@ from core.config import Configuration from core.errors import CoreCommandError, CoreError from core.nodes.base import CoreNode -logger = logging.getLogger(__name__) TEMPLATES_DIR: str = "templates" -def get_template_path(file_path: Path) -> str: - """ - Utility to convert a given file path to a valid template path format. - - :param file_path: file path to convert - :return: template path - """ - if file_path.is_absolute(): - template_path = str(file_path.relative_to("/")) - else: - template_path = str(file_path) - return template_path - - class ConfigServiceMode(enum.Enum): BLOCKING = 0 NON_BLOCKING = 1 @@ -43,18 +27,6 @@ class ConfigServiceBootError(Exception): pass -class ConfigServiceTemplateError(Exception): - pass - - -@dataclass -class ShadowDir: - path: str - src: Optional[str] = None - templates: bool = False - has_node_paths: bool = False - - class ConfigService(abc.ABC): """ Base class for creating configurable services. @@ -66,9 +38,6 @@ class ConfigService(abc.ABC): # time to wait in seconds for determining if service started successfully validation_timer: int = 5 - # directories to shadow and copy files from - shadow_directories: list[ShadowDir] = [] - def __init__(self, node: CoreNode) -> None: """ Create ConfigService instance. 
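Similarly, a sketch of the ModelManager flow above, assuming CORE's built-in BasicRangeModel as the registered wireless model; set_model_config() stores per-node values under the model name and get_model_config() reads them back, falling back to the model defaults when nothing was set.

```python
from core.config import ModelManager
from core.location.mobility import BasicRangeModel  # assumed built-in model for the example

manager = ModelManager()
manager.models[BasicRangeModel.name] = BasicRangeModel
manager.set_model_config(5, BasicRangeModel.name, {"range": "500"})  # node id 5 is arbitrary
print(manager.get_model_config(5, BasicRangeModel.name))
```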
@@ -77,11 +46,11 @@ class ConfigService(abc.ABC): """ self.node: CoreNode = node class_file = inspect.getfile(self.__class__) - templates_path = Path(class_file).parent.joinpath(TEMPLATES_DIR) + templates_path = pathlib.Path(class_file).parent.joinpath(TEMPLATES_DIR) self.templates: TemplateLookup = TemplateLookup(directories=templates_path) - self.config: dict[str, Configuration] = {} - self.custom_templates: dict[str, str] = {} - self.custom_config: dict[str, str] = {} + self.config: Dict[str, Configuration] = {} + self.custom_templates: Dict[str, str] = {} + self.custom_config: Dict[str, str] = {} configs = self.default_configs[:] self._define_config(configs) @@ -108,47 +77,47 @@ class ConfigService(abc.ABC): @property @abc.abstractmethod - def directories(self) -> list[str]: + def directories(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def files(self) -> list[str]: + def files(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def default_configs(self) -> list[Configuration]: + def default_configs(self) -> List[Configuration]: raise NotImplementedError @property @abc.abstractmethod - def modes(self) -> dict[str, dict[str, str]]: + def modes(self) -> Dict[str, Dict[str, str]]: raise NotImplementedError @property @abc.abstractmethod - def executables(self) -> list[str]: + def executables(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def dependencies(self) -> list[str]: + def dependencies(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def startup(self) -> list[str]: + def startup(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def validate(self) -> list[str]: + def validate(self) -> List[str]: raise NotImplementedError @property @abc.abstractmethod - def shutdown(self) -> list[str]: + def shutdown(self) -> List[str]: raise NotImplementedError @property @@ -164,8 +133,7 @@ class ConfigService(abc.ABC): :return: nothing :raises ConfigServiceBootError: when there is an error starting service """ - logger.info("node(%s) service(%s) starting...", self.node.name, self.name) - self.create_shadow_dirs() + logging.info("node(%s) service(%s) starting...", self.node.name, self.name) self.create_dirs() self.create_files() wait = self.validation_mode == ConfigServiceMode.BLOCKING @@ -186,7 +154,7 @@ class ConfigService(abc.ABC): try: self.node.cmd(cmd) except CoreCommandError: - logger.exception( + logging.exception( f"node({self.node.name}) service({self.name}) " f"failed shutdown: {cmd}" ) @@ -200,64 +168,6 @@ class ConfigService(abc.ABC): self.stop() self.start() - def create_shadow_dirs(self) -> None: - """ - Creates a shadow of a host system directory recursively - to be mapped and live within a node. 
- - :return: nothing - :raises CoreError: when there is a failure creating a directory or file - """ - for shadow_dir in self.shadow_directories: - # setup shadow and src paths, using node unique paths when configured - shadow_path = Path(shadow_dir.path) - if shadow_dir.src is None: - src_path = shadow_path - else: - src_path = Path(shadow_dir.src) - if shadow_dir.has_node_paths: - src_path = src_path / self.node.name - # validate shadow and src paths - if not shadow_path.is_absolute(): - raise CoreError(f"shadow dir({shadow_path}) is not absolute") - if not src_path.is_absolute(): - raise CoreError(f"shadow source dir({src_path}) is not absolute") - if not src_path.is_dir(): - raise CoreError(f"shadow source dir({src_path}) does not exist") - # create root of the shadow path within node - logger.info( - "node(%s) creating shadow directory(%s) src(%s) node paths(%s) " - "templates(%s)", - self.node.name, - shadow_path, - src_path, - shadow_dir.has_node_paths, - shadow_dir.templates, - ) - self.node.create_dir(shadow_path) - # find all directories and files to create - dir_paths = [] - file_paths = [] - for path in src_path.rglob("*"): - shadow_src_path = shadow_path / path.relative_to(src_path) - if path.is_dir(): - dir_paths.append(shadow_src_path) - else: - file_paths.append((path, shadow_src_path)) - # create all directories within node - for path in dir_paths: - self.node.create_dir(path) - # create all files within node, from templates when configured - data = self.data() - templates = TemplateLookup(directories=src_path) - for path, dst_path in file_paths: - if shadow_dir.templates: - template = templates.get_template(path.name) - rendered = self._render(template, data) - self.node.create_file(dst_path, rendered) - else: - self.node.copy_file(path, dst_path) - def create_dirs(self) -> None: """ Creates directories for service. @@ -265,18 +175,16 @@ class ConfigService(abc.ABC): :return: nothing :raises CoreError: when there is a failure creating a directory """ - logger.debug("creating config service directories") - for directory in sorted(self.directories): - dir_path = Path(directory) + for directory in self.directories: try: - self.node.create_dir(dir_path) - except (CoreCommandError, CoreError): + self.node.privatedir(directory) + except (CoreCommandError, ValueError): raise CoreError( f"node({self.node.name}) service({self.name}) " f"failure to create service directory: {directory}" ) - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: """ Returns key/value data, used when rendering file templates. @@ -303,7 +211,7 @@ class ConfigService(abc.ABC): """ raise CoreError(f"service({self.name}) unknown template({name})") - def get_templates(self) -> dict[str, str]: + def get_templates(self) -> Dict[str, str]: """ Retrieves mapping of file names to templates for all cases, which includes custom templates, file templates, and text templates. 
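Tying the service lifecycle above together, a short usage sketch. It leans on the SimpleService demo service added later in this diff and assumes `node` is a booted CoreNode from an active Session.

```python
from core.configservices.simpleservice import SimpleService

service = SimpleService(node)             # node: assumed CoreNode from a running session
service.set_config({"value1": "custom"})  # set_config() is shown further below
service.start()   # create_dirs() -> create_files() -> run_startup() -> validation
service.stop()    # runs the shutdown commands (none for SimpleService)
```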
@@ -311,53 +219,19 @@ class ConfigService(abc.ABC): :return: mapping of files to templates """ templates = {} - for file in self.files: - file_path = Path(file) - template_path = get_template_path(file_path) - if file in self.custom_templates: - template = self.custom_templates[file] + for name in self.files: + basename = pathlib.Path(name).name + if name in self.custom_templates: + template = self.custom_templates[name] template = self.clean_text(template) - elif self.templates.has_template(template_path): - template = self.templates.get_template(template_path).source + elif self.templates.has_template(basename): + template = self.templates.get_template(basename).source else: - try: - template = self.get_text_template(file) - except Exception as e: - raise ConfigServiceTemplateError( - f"node({self.node.name}) service({self.name}) file({file}) " - f"failure getting template: {e}" - ) + template = self.get_text_template(name) template = self.clean_text(template) - templates[file] = template + templates[name] = template return templates - def get_rendered_templates(self) -> dict[str, str]: - templates = {} - data = self.data() - for file in sorted(self.files): - rendered = self._get_rendered_template(file, data) - templates[file] = rendered - return templates - - def _get_rendered_template(self, file: str, data: dict[str, Any]) -> str: - file_path = Path(file) - template_path = get_template_path(file_path) - if file in self.custom_templates: - text = self.custom_templates[file] - rendered = self.render_text(text, data) - elif self.templates.has_template(template_path): - rendered = self.render_template(template_path, data) - else: - try: - text = self.get_text_template(file) - except Exception as e: - raise ConfigServiceTemplateError( - f"node({self.node.name}) service({self.name}) file({file}) " - f"failure getting template: {e}" - ) - rendered = self.render_text(text, data) - return rendered - def create_files(self) -> None: """ Creates service files inside associated node. @@ -365,13 +239,24 @@ class ConfigService(abc.ABC): :return: nothing """ data = self.data() - for file in sorted(self.files): - logger.debug( - "node(%s) service(%s) template(%s)", self.node.name, self.name, file + for name in self.files: + basename = pathlib.Path(name).name + if name in self.custom_templates: + text = self.custom_templates[name] + rendered = self.render_text(text, data) + elif self.templates.has_template(basename): + rendered = self.render_template(basename, data) + else: + text = self.get_text_template(name) + rendered = self.render_text(text, data) + logging.debug( + "node(%s) service(%s) template(%s): \n%s", + self.node.name, + self.name, + name, + rendered, ) - rendered = self._get_rendered_template(file, data) - file_path = Path(file) - self.node.create_file(file_path, rendered) + self.node.nodefile(name, rendered) def run_startup(self, wait: bool) -> None: """ @@ -415,7 +300,7 @@ class ConfigService(abc.ABC): del cmds[index] index += 1 except CoreCommandError: - logger.debug( + logging.debug( f"node({self.node.name}) service({self.name}) " f"validate command failed: {cmd}" ) @@ -426,7 +311,7 @@ class ConfigService(abc.ABC): f"node({self.node.name}) service({self.name}) failed to validate" ) - def _render(self, template: Template, data: dict[str, Any] = None) -> str: + def _render(self, template: Template, data: Dict[str, Any] = None) -> str: """ Renders template providing all associated data to template. 
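The lookup above resolves each file against custom_templates first, then the packaged Mako templates next to the service class, then get_text_template(). Overriding a single file on the sketched `service` instance from the previous example is just a dict assignment:

```python
service.custom_templates["test1.sh"] = "# overridden for ${node.name}\n"
print(service.get_templates()["test1.sh"])  # the custom text wins for this file
# create_files() will render the override and write it into the node
```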
@@ -440,7 +325,7 @@ class ConfigService(abc.ABC): node=self.node, config=self.render_config(), **data ) - def render_text(self, text: str, data: dict[str, Any] = None) -> str: + def render_text(self, text: str, data: Dict[str, Any] = None) -> str: """ Renders text based template providing all associated data to template. @@ -458,24 +343,24 @@ class ConfigService(abc.ABC): f"{exceptions.text_error_template().render_unicode()}" ) - def render_template(self, template_path: str, data: dict[str, Any] = None) -> str: + def render_template(self, basename: str, data: Dict[str, Any] = None) -> str: """ Renders file based template providing all associated data to template. - :param template_path: path of file to render + :param basename: base name for file to render :param data: service specific defined data for template :return: rendered template """ try: - template = self.templates.get_template(template_path) + template = self.templates.get_template(basename) return self._render(template, data) except Exception: raise CoreError( - f"node({self.node.name}) service({self.name}) file({template_path})" - f"{exceptions.text_error_template().render_unicode()}" + f"node({self.node.name}) service({self.name}) " + f"{exceptions.text_error_template().render_template()}" ) - def _define_config(self, configs: list[Configuration]) -> None: + def _define_config(self, configs: List[Configuration]) -> None: """ Initializes default configuration data. @@ -485,7 +370,7 @@ class ConfigService(abc.ABC): for config in configs: self.config[config.id] = config - def render_config(self) -> dict[str, str]: + def render_config(self) -> Dict[str, str]: """ Returns configuration data key/value pairs for rendering a template. @@ -496,7 +381,7 @@ class ConfigService(abc.ABC): else: return {k: v.default for k, v in self.config.items()} - def set_config(self, data: dict[str, str]) -> None: + def set_config(self, data: Dict[str, str]) -> None: """ Set configuration data from key/value pairs. diff --git a/daemon/core/configservice/dependencies.py b/daemon/core/configservice/dependencies.py index 1fbc4e48..be1c45e7 100644 --- a/daemon/core/configservice/dependencies.py +++ b/daemon/core/configservice/dependencies.py @@ -1,7 +1,5 @@ import logging -from typing import TYPE_CHECKING - -logger = logging.getLogger(__name__) +from typing import TYPE_CHECKING, Dict, List, Set if TYPE_CHECKING: from core.configservice.base import ConfigService @@ -12,16 +10,16 @@ class ConfigServiceDependencies: Generates sets of services to start in order of their dependencies. """ - def __init__(self, services: dict[str, "ConfigService"]) -> None: + def __init__(self, services: Dict[str, "ConfigService"]) -> None: """ Create a ConfigServiceDependencies instance. 
:param services: services for determining dependency sets """ # helpers to check validity - self.dependents: dict[str, set[str]] = {} - self.started: set[str] = set() - self.node_services: dict[str, "ConfigService"] = {} + self.dependents: Dict[str, Set[str]] = {} + self.started: Set[str] = set() + self.node_services: Dict[str, "ConfigService"] = {} for service in services.values(): self.node_services[service.name] = service for dependency in service.dependencies: @@ -29,11 +27,11 @@ class ConfigServiceDependencies: dependents.add(service.name) # used to find paths - self.path: list["ConfigService"] = [] - self.visited: set[str] = set() - self.visiting: set[str] = set() + self.path: List["ConfigService"] = [] + self.visited: Set[str] = set() + self.visiting: Set[str] = set() - def startup_paths(self) -> list[list["ConfigService"]]: + def startup_paths(self) -> List[List["ConfigService"]]: """ Find startup path sets based on service dependencies. @@ -43,7 +41,7 @@ class ConfigServiceDependencies: for name in self.node_services: service = self.node_services[name] if service.name in self.started: - logger.debug( + logging.debug( "skipping service that will already be started: %s", service.name ) continue @@ -54,8 +52,8 @@ class ConfigServiceDependencies: if self.started != set(self.node_services): raise ValueError( - f"failure to start all services: {self.started} != " - f"{self.node_services.keys()}" + "failure to start all services: %s != %s" + % (self.started, self.node_services.keys()) ) return paths @@ -70,25 +68,25 @@ class ConfigServiceDependencies: self.visited.clear() self.visiting.clear() - def _start(self, service: "ConfigService") -> list["ConfigService"]: + def _start(self, service: "ConfigService") -> List["ConfigService"]: """ Starts a oath for checking dependencies for a given service. :param service: service to check dependencies for :return: list of config services to start in order """ - logger.debug("starting service dependency check: %s", service.name) + logging.debug("starting service dependency check: %s", service.name) self._reset() return self._visit(service) - def _visit(self, current_service: "ConfigService") -> list["ConfigService"]: + def _visit(self, current_service: "ConfigService") -> List["ConfigService"]: """ Visits a service when discovering dependency chains for service. 
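For reference, a sketch of how the dependency walker above orders services. The two services here are stand-ins (SimpleNamespace objects) carrying only the attributes the walker actually reads, name and dependencies.

```python
from types import SimpleNamespace

from core.configservice.dependencies import ConfigServiceDependencies

zebra = SimpleNamespace(name="zebra", dependencies=[])
ospf = SimpleNamespace(name="OSPFv2", dependencies=["zebra"])

deps = ConfigServiceDependencies({s.name: s for s in (zebra, ospf)})
for path in deps.startup_paths():
    print([s.name for s in path])  # zebra always precedes OSPFv2 in its path
```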
:param current_service: service being visited :return: list of dependent services for a visited service """ - logger.debug("visiting service(%s): %s", current_service.name, self.path) + logging.debug("visiting service(%s): %s", current_service.name, self.path) self.visited.add(current_service.name) self.visiting.add(current_service.name) @@ -96,14 +94,14 @@ class ConfigServiceDependencies: for service_name in current_service.dependencies: if service_name not in self.node_services: raise ValueError( - "required dependency was not included in node " - f"services: {service_name}" + "required dependency was not included in node services: %s" + % service_name ) if service_name in self.visiting: raise ValueError( - f"cyclic dependency at service({current_service.name}): " - f"{service_name}" + "cyclic dependency at service(%s): %s" + % (current_service.name, service_name) ) if service_name not in self.visited: @@ -111,7 +109,7 @@ class ConfigServiceDependencies: self._visit(service) # add service when bottom is found - logger.debug("adding service to startup path: %s", current_service.name) + logging.debug("adding service to startup path: %s", current_service.name) self.started.add(current_service.name) self.path.append(current_service) self.visiting.remove(current_service.name) diff --git a/daemon/core/configservice/manager.py b/daemon/core/configservice/manager.py index 542f3cc5..83657655 100644 --- a/daemon/core/configservice/manager.py +++ b/daemon/core/configservice/manager.py @@ -1,14 +1,11 @@ import logging import pathlib -import pkgutil -from pathlib import Path +from typing import Dict, List, Type -from core import configservices, utils +from core import utils from core.configservice.base import ConfigService from core.errors import CoreError -logger = logging.getLogger(__name__) - class ConfigServiceManager: """ @@ -19,9 +16,9 @@ class ConfigServiceManager: """ Create a ConfigServiceManager instance. """ - self.services: dict[str, type[ConfigService]] = {} + self.services: Dict[str, Type[ConfigService]] = {} - def get_service(self, name: str) -> type[ConfigService]: + def get_service(self, name: str) -> Type[ConfigService]: """ Retrieve a service by name. @@ -31,10 +28,10 @@ class ConfigServiceManager: """ service_class = self.services.get(name) if service_class is None: - raise CoreError(f"service does not exist {name}") + raise CoreError(f"service does not exit {name}") return service_class - def add(self, service: type[ConfigService]) -> None: + def add(self, service: Type[ConfigService]) -> None: """ Add service to manager, checking service requirements have been met. @@ -43,7 +40,7 @@ class ConfigServiceManager: :raises CoreError: when service is a duplicate or has unmet executables """ name = service.name - logger.debug( + logging.debug( "loading service: class(%s) name(%s)", service.__class__.__name__, name ) @@ -58,46 +55,27 @@ class ConfigServiceManager: except CoreError as e: raise CoreError(f"config service({service.name}): {e}") - # make service available + # make service available self.services[name] = service - def load_locals(self) -> list[str]: + def load(self, path: str) -> List[str]: """ - Search and add config service from local core module. - - :return: list of errors when loading services - """ - errors = [] - for module_info in pkgutil.walk_packages( - configservices.__path__, f"{configservices.__name__}." 
- ): - services = utils.load_module(module_info.name, ConfigService) - for service in services: - try: - self.add(service) - except CoreError as e: - errors.append(service.name) - logger.debug("not loading config service(%s): %s", service.name, e) - return errors - - def load(self, path: Path) -> list[str]: - """ - Search path provided for config services and add them for being managed. + Search path provided for configurable services and add them for being managed. :param path: path to search configurable services - :return: list errors when loading services + :return: list errors when loading and adding services """ path = pathlib.Path(path) subdirs = [x for x in path.iterdir() if x.is_dir()] subdirs.append(path) service_errors = [] for subdir in subdirs: - logger.debug("loading config services from: %s", subdir) - services = utils.load_classes(subdir, ConfigService) + logging.debug("loading config services from: %s", subdir) + services = utils.load_classes(str(subdir), ConfigService) for service in services: try: self.add(service) except CoreError as e: service_errors.append(service.name) - logger.debug("not loading service(%s): %s", service.name, e) + logging.debug("not loading service(%s): %s", service.name, e) return service_errors diff --git a/daemon/core/configservices/frrservices/services.py b/daemon/core/configservices/frrservices/services.py index 378d42f8..d6ebacf3 100644 --- a/daemon/core/configservices/frrservices/services.py +++ b/daemon/core/configservices/frrservices/services.py @@ -1,29 +1,17 @@ import abc -from typing import Any +from typing import Any, Dict, List from core.config import Configuration from core.configservice.base import ConfigService, ConfigServiceMode from core.emane.nodes import EmaneNet -from core.nodes.base import CoreNodeBase, NodeBase +from core.nodes.base import CoreNodeBase from core.nodes.interface import DEFAULT_MTU, CoreInterface -from core.nodes.network import PtpNet, WlanNode -from core.nodes.physical import Rj45Node -from core.nodes.wireless import WirelessNode +from core.nodes.network import WlanNode GROUP: str = "FRR" FRR_STATE_DIR: str = "/var/run/frr" -def is_wireless(node: NodeBase) -> bool: - """ - Check if the node is a wireless type node. - - :param node: node to check type for - :return: True if wireless type, False otherwise - """ - return isinstance(node, (WlanNode, EmaneNet, WirelessNode)) - - def has_mtu_mismatch(iface: CoreInterface) -> bool: """ Helper to detect MTU mismatch and add the appropriate FRR @@ -65,47 +53,32 @@ def get_router_id(node: CoreNodeBase) -> str: return "0.0.0.0" -def rj45_check(iface: CoreInterface) -> bool: - """ - Helper to detect whether interface is connected an external RJ45 - link. 
- """ - if iface.net: - for peer_iface in iface.net.get_ifaces(): - if peer_iface == iface: - continue - if isinstance(peer_iface.node, Rj45Node): - return True - return False - - class FRRZebra(ConfigService): name: str = "FRRzebra" group: str = GROUP - directories: list[str] = ["/usr/local/etc/frr", "/var/run/frr", "/var/log/frr"] - files: list[str] = [ + directories: List[str] = ["/usr/local/etc/frr", "/var/run/frr", "/var/log/frr"] + files: List[str] = [ "/usr/local/etc/frr/frr.conf", "frrboot.sh", "/usr/local/etc/frr/vtysh.conf", "/usr/local/etc/frr/daemons", ] - executables: list[str] = ["zebra"] - dependencies: list[str] = [] - startup: list[str] = ["bash frrboot.sh zebra"] - validate: list[str] = ["pidof zebra"] - shutdown: list[str] = ["killall zebra"] + executables: List[str] = ["zebra"] + dependencies: List[str] = [] + startup: List[str] = ["bash frrboot.sh zebra"] + validate: List[str] = ["pidof zebra"] + shutdown: List[str] = ["killall zebra"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: frr_conf = self.files[0] - frr_bin_search = self.node.session.options.get( + frr_bin_search = self.node.session.options.get_config( "frr_bin_search", default="/usr/local/bin /usr/bin /usr/lib/frr" ).strip('"') - frr_sbin_search = self.node.session.options.get( - "frr_sbin_search", - default="/usr/local/sbin /usr/sbin /usr/lib/frr /usr/libexec/frr", + frr_sbin_search = self.node.session.options.get_config( + "frr_sbin_search", default="/usr/local/sbin /usr/sbin /usr/lib/frr" ).strip('"') services = [] @@ -130,7 +103,8 @@ class FRRZebra(ConfigService): ip4s.append(str(ip4.ip)) for ip6 in iface.ip6s: ip6s.append(str(ip6.ip)) - ifaces.append((iface, ip4s, ip6s, iface.control)) + is_control = getattr(iface, "control", False) + ifaces.append((iface, ip4s, ip6s, is_control)) return dict( frr_conf=frr_conf, @@ -146,16 +120,16 @@ class FRRZebra(ConfigService): class FrrService(abc.ABC): group: str = GROUP - directories: list[str] = [] - files: list[str] = [] - executables: list[str] = [] - dependencies: list[str] = ["FRRzebra"] - startup: list[str] = [] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = [] + executables: List[str] = [] + dependencies: List[str] = ["FRRzebra"] + startup: List[str] = [] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} ipv4_routing: bool = False ipv6_routing: bool = False @@ -176,8 +150,8 @@ class FRROspfv2(FrrService, ConfigService): """ name: str = "FRROSPFv2" - shutdown: list[str] = ["killall ospfd"] - validate: list[str] = ["pidof ospfd"] + shutdown: List[str] = ["killall ospfd"] + validate: List[str] = ["pidof ospfd"] ipv4_routing: bool = True def frr_config(self) -> str: @@ -185,7 +159,7 @@ class FRROspfv2(FrrService, ConfigService): addresses = [] for iface in self.node.get_ifaces(control=False): for ip4 in iface.ip4s: - addresses.append(str(ip4)) + addresses.append(str(ip4.ip)) data = dict(router_id=router_id, addresses=addresses) text = """ router ospf @@ -193,31 +167,15 @@ class FRROspfv2(FrrService, ConfigService): % for 
addr in addresses: network ${addr} area 0 % endfor - ospf opaque-lsa ! """ return self.render_text(text, data) def frr_iface_config(self, iface: CoreInterface) -> str: - has_mtu = has_mtu_mismatch(iface) - has_rj45 = rj45_check(iface) - is_ptp = isinstance(iface.net, PtpNet) - data = dict(has_mtu=has_mtu, is_ptp=is_ptp, has_rj45=has_rj45) - text = """ - % if has_mtu: - ip ospf mtu-ignore - % endif - % if has_rj45: - <% return STOP_RENDERING %> - % endif - % if is_ptp: - ip ospf network point-to-point - % endif - ip ospf hello-interval 2 - ip ospf dead-interval 6 - ip ospf retransmit-interval 5 - """ - return self.render_text(text, data) + if has_mtu_mismatch(iface): + return "ip ospf mtu-ignore" + else: + return "" class FRROspfv3(FrrService, ConfigService): @@ -228,8 +186,8 @@ class FRROspfv3(FrrService, ConfigService): """ name: str = "FRROSPFv3" - shutdown: list[str] = ["killall ospf6d"] - validate: list[str] = ["pidof ospf6d"] + shutdown: List[str] = ["killall ospf6d"] + validate: List[str] = ["pidof ospf6d"] ipv4_routing: bool = True ipv6_routing: bool = True @@ -265,8 +223,8 @@ class FRRBgp(FrrService, ConfigService): """ name: str = "FRRBGP" - shutdown: list[str] = ["killall bgpd"] - validate: list[str] = ["pidof bgpd"] + shutdown: List[str] = ["killall bgpd"] + validate: List[str] = ["pidof bgpd"] custom_needed: bool = True ipv4_routing: bool = True ipv6_routing: bool = True @@ -295,8 +253,8 @@ class FRRRip(FrrService, ConfigService): """ name: str = "FRRRIP" - shutdown: list[str] = ["killall ripd"] - validate: list[str] = ["pidof ripd"] + shutdown: List[str] = ["killall ripd"] + validate: List[str] = ["pidof ripd"] ipv4_routing: bool = True def frr_config(self) -> str: @@ -320,8 +278,8 @@ class FRRRipng(FrrService, ConfigService): """ name: str = "FRRRIPNG" - shutdown: list[str] = ["killall ripngd"] - validate: list[str] = ["pidof ripngd"] + shutdown: List[str] = ["killall ripngd"] + validate: List[str] = ["pidof ripngd"] ipv6_routing: bool = True def frr_config(self) -> str: @@ -346,8 +304,8 @@ class FRRBabel(FrrService, ConfigService): """ name: str = "FRRBabel" - shutdown: list[str] = ["killall babeld"] - validate: list[str] = ["pidof babeld"] + shutdown: List[str] = ["killall babeld"] + validate: List[str] = ["pidof babeld"] ipv6_routing: bool = True def frr_config(self) -> str: @@ -367,7 +325,7 @@ class FRRBabel(FrrService, ConfigService): return self.render_text(text, data) def frr_iface_config(self, iface: CoreInterface) -> str: - if is_wireless(iface.net): + if isinstance(iface.net, (WlanNode, EmaneNet)): text = """ babel wireless no babel split-horizon @@ -386,8 +344,8 @@ class FRRpimd(FrrService, ConfigService): """ name: str = "FRRpimd" - shutdown: list[str] = ["killall pimd"] - validate: list[str] = ["pidof pimd"] + shutdown: List[str] = ["killall pimd"] + validate: List[str] = ["pidof pimd"] ipv4_routing: bool = True def frr_config(self) -> str: diff --git a/daemon/core/configservices/frrservices/templates/usr/local/etc/frr/daemons b/daemon/core/configservices/frrservices/templates/daemons similarity index 100% rename from daemon/core/configservices/frrservices/templates/usr/local/etc/frr/daemons rename to daemon/core/configservices/frrservices/templates/daemons diff --git a/daemon/core/configservices/frrservices/templates/usr/local/etc/frr/frr.conf b/daemon/core/configservices/frrservices/templates/frr.conf similarity index 100% rename from daemon/core/configservices/frrservices/templates/usr/local/etc/frr/frr.conf rename to 
daemon/core/configservices/frrservices/templates/frr.conf diff --git a/daemon/core/configservices/frrservices/templates/frrboot.sh b/daemon/core/configservices/frrservices/templates/frrboot.sh index c1c11d28..db47b6d1 100644 --- a/daemon/core/configservices/frrservices/templates/frrboot.sh +++ b/daemon/core/configservices/frrservices/templates/frrboot.sh @@ -48,10 +48,6 @@ bootdaemon() flags="$flags -6" fi - if [ "$1" = "ospfd" ]; then - flags="$flags --apiserver" - fi - #force FRR to use CORE generated conf file flags="$flags -d -f $FRR_CONF" $FRR_SBIN_DIR/$1 $flags diff --git a/daemon/core/configservices/frrservices/templates/usr/local/etc/frr/vtysh.conf b/daemon/core/configservices/frrservices/templates/vtysh.conf similarity index 100% rename from daemon/core/configservices/frrservices/templates/usr/local/etc/frr/vtysh.conf rename to daemon/core/configservices/frrservices/templates/vtysh.conf diff --git a/daemon/core/configservices/nrlservices/services.py b/daemon/core/configservices/nrlservices/services.py index 3002cd94..3f911aef 100644 --- a/daemon/core/configservices/nrlservices/services.py +++ b/daemon/core/configservices/nrlservices/services.py @@ -1,4 +1,4 @@ -from typing import Any +from typing import Any, Dict, List from core import utils from core.config import Configuration @@ -10,18 +10,18 @@ GROUP: str = "ProtoSvc" class MgenSinkService(ConfigService): name: str = "MGEN_Sink" group: str = GROUP - directories: list[str] = [] - files: list[str] = ["mgensink.sh", "sink.mgen"] - executables: list[str] = ["mgen"] - dependencies: list[str] = [] - startup: list[str] = ["bash mgensink.sh"] - validate: list[str] = ["pidof mgen"] - shutdown: list[str] = ["killall mgen"] + directories: List[str] = [] + files: List[str] = ["mgensink.sh", "sink.mgen"] + executables: List[str] = ["mgen"] + dependencies: List[str] = [] + startup: List[str] = ["bash mgensink.sh"] + validate: List[str] = ["pidof mgen"] + shutdown: List[str] = ["killall mgen"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifnames = [] for iface in self.node.get_ifaces(): name = utils.sysctl_devname(iface.name) @@ -32,18 +32,18 @@ class MgenSinkService(ConfigService): class NrlNhdp(ConfigService): name: str = "NHDP" group: str = GROUP - directories: list[str] = [] - files: list[str] = ["nrlnhdp.sh"] - executables: list[str] = ["nrlnhdp"] - dependencies: list[str] = [] - startup: list[str] = ["bash nrlnhdp.sh"] - validate: list[str] = ["pidof nrlnhdp"] - shutdown: list[str] = ["killall nrlnhdp"] + directories: List[str] = [] + files: List[str] = ["nrlnhdp.sh"] + executables: List[str] = ["nrlnhdp"] + dependencies: List[str] = [] + startup: List[str] = ["bash nrlnhdp.sh"] + validate: List[str] = ["pidof nrlnhdp"] + shutdown: List[str] = ["killall nrlnhdp"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: has_smf = "SMF" in self.node.config_services ifnames = [] for iface in self.node.get_ifaces(control=False): @@ -54,18 +54,19 @@ class NrlNhdp(ConfigService): class NrlSmf(ConfigService): name: str = "SMF" group: str = GROUP 
- directories: list[str] = [] - files: list[str] = ["startsmf.sh"] - executables: list[str] = ["nrlsmf", "killall"] - dependencies: list[str] = [] - startup: list[str] = ["bash startsmf.sh"] - validate: list[str] = ["pidof nrlsmf"] - shutdown: list[str] = ["killall nrlsmf"] + directories: List[str] = [] + files: List[str] = ["startsmf.sh"] + executables: List[str] = ["nrlsmf", "killall"] + dependencies: List[str] = [] + startup: List[str] = ["bash startsmf.sh"] + validate: List[str] = ["pidof nrlsmf"] + shutdown: List[str] = ["killall nrlsmf"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: + has_arouted = "arouted" in self.node.config_services has_nhdp = "NHDP" in self.node.config_services has_olsr = "OLSR" in self.node.config_services ifnames = [] @@ -77,25 +78,29 @@ class NrlSmf(ConfigService): ip4_prefix = f"{ip4.ip}/{24}" break return dict( - has_nhdp=has_nhdp, has_olsr=has_olsr, ifnames=ifnames, ip4_prefix=ip4_prefix + has_arouted=has_arouted, + has_nhdp=has_nhdp, + has_olsr=has_olsr, + ifnames=ifnames, + ip4_prefix=ip4_prefix, ) class NrlOlsr(ConfigService): name: str = "OLSR" group: str = GROUP - directories: list[str] = [] - files: list[str] = ["nrlolsrd.sh"] - executables: list[str] = ["nrlolsrd"] - dependencies: list[str] = [] - startup: list[str] = ["bash nrlolsrd.sh"] - validate: list[str] = ["pidof nrlolsrd"] - shutdown: list[str] = ["killall nrlolsrd"] + directories: List[str] = [] + files: List[str] = ["nrlolsrd.sh"] + executables: List[str] = ["nrlolsrd"] + dependencies: List[str] = [] + startup: List[str] = ["bash nrlolsrd.sh"] + validate: List[str] = ["pidof nrlolsrd"] + shutdown: List[str] = ["killall nrlolsrd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: has_smf = "SMF" in self.node.config_services has_zebra = "zebra" in self.node.config_services ifname = None @@ -108,18 +113,18 @@ class NrlOlsr(ConfigService): class NrlOlsrv2(ConfigService): name: str = "OLSRv2" group: str = GROUP - directories: list[str] = [] - files: list[str] = ["nrlolsrv2.sh"] - executables: list[str] = ["nrlolsrv2"] - dependencies: list[str] = [] - startup: list[str] = ["bash nrlolsrv2.sh"] - validate: list[str] = ["pidof nrlolsrv2"] - shutdown: list[str] = ["killall nrlolsrv2"] + directories: List[str] = [] + files: List[str] = ["nrlolsrv2.sh"] + executables: List[str] = ["nrlolsrv2"] + dependencies: List[str] = [] + startup: List[str] = ["bash nrlolsrv2.sh"] + validate: List[str] = ["pidof nrlolsrv2"] + shutdown: List[str] = ["killall nrlolsrv2"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: has_smf = "SMF" in self.node.config_services ifnames = [] for iface in self.node.get_ifaces(control=False): @@ -130,18 +135,18 @@ class NrlOlsrv2(ConfigService): class OlsrOrg(ConfigService): name: str = "OLSRORG" group: str = GROUP - directories: 
list[str] = ["/etc/olsrd"] - files: list[str] = ["olsrd.sh", "/etc/olsrd/olsrd.conf"] - executables: list[str] = ["olsrd"] - dependencies: list[str] = [] - startup: list[str] = ["bash olsrd.sh"] - validate: list[str] = ["pidof olsrd"] - shutdown: list[str] = ["killall olsrd"] + directories: List[str] = ["/etc/olsrd"] + files: List[str] = ["olsrd.sh", "/etc/olsrd/olsrd.conf"] + executables: List[str] = ["olsrd"] + dependencies: List[str] = [] + startup: List[str] = ["bash olsrd.sh"] + validate: List[str] = ["pidof olsrd"] + shutdown: List[str] = ["killall olsrd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: has_smf = "SMF" in self.node.config_services ifnames = [] for iface in self.node.get_ifaces(control=False): @@ -152,13 +157,37 @@ class OlsrOrg(ConfigService): class MgenActor(ConfigService): name: str = "MgenActor" group: str = GROUP - directories: list[str] = [] - files: list[str] = ["start_mgen_actor.sh"] - executables: list[str] = ["mgen"] - dependencies: list[str] = [] - startup: list[str] = ["bash start_mgen_actor.sh"] - validate: list[str] = ["pidof mgen"] - shutdown: list[str] = ["killall mgen"] + directories: List[str] = [] + files: List[str] = ["start_mgen_actor.sh"] + executables: List[str] = ["mgen"] + dependencies: List[str] = [] + startup: List[str] = ["bash start_mgen_actor.sh"] + validate: List[str] = ["pidof mgen"] + shutdown: List[str] = ["killall mgen"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} + + +class Arouted(ConfigService): + name: str = "arouted" + group: str = GROUP + directories: List[str] = [] + files: List[str] = ["startarouted.sh"] + executables: List[str] = ["arouted"] + dependencies: List[str] = [] + startup: List[str] = ["bash startarouted.sh"] + validate: List[str] = ["pidof arouted"] + shutdown: List[str] = ["pkill arouted"] + validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} + + def data(self) -> Dict[str, Any]: + ip4_prefix = None + for iface in self.node.get_ifaces(control=False): + ip4 = iface.get_ip4() + if ip4: + ip4_prefix = f"{ip4.ip}/{24}" + break + return dict(ip4_prefix=ip4_prefix) diff --git a/daemon/core/configservices/nrlservices/templates/etc/olsrd/olsrd.conf b/daemon/core/configservices/nrlservices/templates/olsrd.conf similarity index 100% rename from daemon/core/configservices/nrlservices/templates/etc/olsrd/olsrd.conf rename to daemon/core/configservices/nrlservices/templates/olsrd.conf diff --git a/daemon/core/configservices/nrlservices/templates/startarouted.sh b/daemon/core/configservices/nrlservices/templates/startarouted.sh new file mode 100644 index 00000000..20bcc45e --- /dev/null +++ b/daemon/core/configservices/nrlservices/templates/startarouted.sh @@ -0,0 +1,15 @@ +#!/bin/sh +for f in "/tmp/${node.name}_smf"; do + count=1 + until [ -e "$f" ]; do + if [ $count -eq 10 ]; then + echo "ERROR: nrlmsf pipe not found: $f" >&2 + exit 1 + fi + sleep 0.1 + count=$(($count + 1)) + done +done + +ip route add ${ip4_prefix} dev lo +arouted instance ${node.name}_smf tap ${node.name}_tap stability 10 
2>&1 > /var/log/arouted.log & diff --git a/daemon/core/configservices/nrlservices/templates/startsmf.sh b/daemon/core/configservices/nrlservices/templates/startsmf.sh index 458b3ee9..921568de 100644 --- a/daemon/core/configservices/nrlservices/templates/startsmf.sh +++ b/daemon/core/configservices/nrlservices/templates/startsmf.sh @@ -1,5 +1,8 @@ <% ifaces = ",".join(ifnames) + arouted = "" + if has_arouted: + arouted = "tap %s_tap unicast %s push lo,%s resequence on" % (node.name, ip4_prefix, ifnames[0]) if has_nhdp: flood = "ecds" elif has_olsr: @@ -9,4 +12,4 @@ %> #!/bin/sh # auto-generated by NrlSmf service -nrlsmf instance ${node.name}_smf ${flood} ${ifaces} hash MD5 log /var/log/nrlsmf.log < /dev/null > /dev/null 2>&1 & +nrlsmf instance ${node.name}_smf ${ifaces} ${arouted} ${flood} hash MD5 log /var/log/nrlsmf.log < /dev/null > /dev/null 2>&1 & diff --git a/daemon/core/configservices/quaggaservices/services.py b/daemon/core/configservices/quaggaservices/services.py index 8b4d4909..d3083ab6 100644 --- a/daemon/core/configservices/quaggaservices/services.py +++ b/daemon/core/configservices/quaggaservices/services.py @@ -1,31 +1,18 @@ import abc import logging -from typing import Any +from typing import Any, Dict, List from core.config import Configuration from core.configservice.base import ConfigService, ConfigServiceMode from core.emane.nodes import EmaneNet -from core.nodes.base import CoreNodeBase, NodeBase +from core.nodes.base import CoreNodeBase from core.nodes.interface import DEFAULT_MTU, CoreInterface -from core.nodes.network import PtpNet, WlanNode -from core.nodes.physical import Rj45Node -from core.nodes.wireless import WirelessNode +from core.nodes.network import WlanNode -logger = logging.getLogger(__name__) GROUP: str = "Quagga" QUAGGA_STATE_DIR: str = "/var/run/quagga" -def is_wireless(node: NodeBase) -> bool: - """ - Check if the node is a wireless type node. - - :param node: node to check type for - :return: True if wireless type, False otherwise - """ - return isinstance(node, (WlanNode, EmaneNet, WirelessNode)) - - def has_mtu_mismatch(iface: CoreInterface) -> bool: """ Helper to detect MTU mismatch and add the appropriate OSPF @@ -67,43 +54,29 @@ def get_router_id(node: CoreNodeBase) -> str: return "0.0.0.0" -def rj45_check(iface: CoreInterface) -> bool: - """ - Helper to detect whether interface is connected an external RJ45 - link. 
- """ - if iface.net: - for peer_iface in iface.net.get_ifaces(): - if peer_iface == iface: - continue - if isinstance(peer_iface.node, Rj45Node): - return True - return False - - class Zebra(ConfigService): name: str = "zebra" group: str = GROUP - directories: list[str] = ["/usr/local/etc/quagga", "/var/run/quagga"] - files: list[str] = [ + directories: List[str] = ["/usr/local/etc/quagga", "/var/run/quagga"] + files: List[str] = [ "/usr/local/etc/quagga/Quagga.conf", "quaggaboot.sh", "/usr/local/etc/quagga/vtysh.conf", ] - executables: list[str] = ["zebra"] - dependencies: list[str] = [] - startup: list[str] = ["bash quaggaboot.sh zebra"] - validate: list[str] = ["pidof zebra"] - shutdown: list[str] = ["killall zebra"] + executables: List[str] = ["zebra"] + dependencies: List[str] = [] + startup: List[str] = ["bash quaggaboot.sh zebra"] + validate: List[str] = ["pidof zebra"] + shutdown: List[str] = ["killall zebra"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: - quagga_bin_search = self.node.session.options.get( + def data(self) -> Dict[str, Any]: + quagga_bin_search = self.node.session.options.get_config( "quagga_bin_search", default="/usr/local/bin /usr/bin /usr/lib/quagga" ).strip('"') - quagga_sbin_search = self.node.session.options.get( + quagga_sbin_search = self.node.session.options.get_config( "quagga_sbin_search", default="/usr/local/sbin /usr/sbin /usr/lib/quagga" ).strip('"') quagga_state_dir = QUAGGA_STATE_DIR @@ -128,16 +101,11 @@ class Zebra(ConfigService): ip4s = [] ip6s = [] for ip4 in iface.ip4s: - ip4s.append(str(ip4)) + ip4s.append(str(ip4.ip)) for ip6 in iface.ip6s: - ip6s.append(str(ip6)) - configs = [] - if not iface.control: - for service in services: - config = service.quagga_iface_config(iface) - if config: - configs.append(config.split("\n")) - ifaces.append((iface, ip4s, ip6s, configs)) + ip6s.append(str(ip6.ip)) + is_control = getattr(iface, "control", False) + ifaces.append((iface, ip4s, ip6s, is_control)) return dict( quagga_bin_search=quagga_bin_search, @@ -153,16 +121,16 @@ class Zebra(ConfigService): class QuaggaService(abc.ABC): group: str = GROUP - directories: list[str] = [] - files: list[str] = [] - executables: list[str] = [] - dependencies: list[str] = ["zebra"] - startup: list[str] = [] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = [] + executables: List[str] = [] + dependencies: List[str] = ["zebra"] + startup: List[str] = [] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} ipv4_routing: bool = False ipv6_routing: bool = False @@ -183,37 +151,22 @@ class Ospfv2(QuaggaService, ConfigService): """ name: str = "OSPFv2" - validate: list[str] = ["pidof ospfd"] - shutdown: list[str] = ["killall ospfd"] + validate: List[str] = ["pidof ospfd"] + shutdown: List[str] = ["killall ospfd"] ipv4_routing: bool = True def quagga_iface_config(self, iface: CoreInterface) -> str: - has_mtu = has_mtu_mismatch(iface) - has_rj45 = rj45_check(iface) - is_ptp = isinstance(iface.net, PtpNet) - data = dict(has_mtu=has_mtu, is_ptp=is_ptp, has_rj45=has_rj45) - text 
= """ - % if has_mtu: - ip ospf mtu-ignore - % endif - % if has_rj45: - <% return STOP_RENDERING %> - % endif - % if is_ptp: - ip ospf network point-to-point - % endif - ip ospf hello-interval 2 - ip ospf dead-interval 6 - ip ospf retransmit-interval 5 - """ - return self.render_text(text, data) + if has_mtu_mismatch(iface): + return "ip ospf mtu-ignore" + else: + return "" def quagga_config(self) -> str: router_id = get_router_id(self.node) addresses = [] for iface in self.node.get_ifaces(control=False): for ip4 in iface.ip4s: - addresses.append(str(ip4)) + addresses.append(str(ip4.ip)) data = dict(router_id=router_id, addresses=addresses) text = """ router ospf @@ -234,8 +187,8 @@ class Ospfv3(QuaggaService, ConfigService): """ name: str = "OSPFv3" - shutdown: list[str] = ["killall ospf6d"] - validate: list[str] = ["pidof ospf6d"] + shutdown: List[str] = ["killall ospf6d"] + validate: List[str] = ["pidof ospf6d"] ipv4_routing: bool = True ipv6_routing: bool = True @@ -274,9 +227,15 @@ class Ospfv3mdr(Ospfv3): name: str = "OSPFv3MDR" + def data(self) -> Dict[str, Any]: + for iface in self.node.get_ifaces(): + is_wireless = isinstance(iface.net, (WlanNode, EmaneNet)) + logging.info("MDR wireless: %s", is_wireless) + return dict() + def quagga_iface_config(self, iface: CoreInterface) -> str: config = super().quagga_iface_config(iface) - if is_wireless(iface.net): + if isinstance(iface.net, (WlanNode, EmaneNet)): config = self.clean_text( f""" {config} @@ -300,12 +259,15 @@ class Bgp(QuaggaService, ConfigService): """ name: str = "BGP" - shutdown: list[str] = ["killall bgpd"] - validate: list[str] = ["pidof bgpd"] + shutdown: List[str] = ["killall bgpd"] + validate: List[str] = ["pidof bgpd"] ipv4_routing: bool = True ipv6_routing: bool = True def quagga_config(self) -> str: + return "" + + def quagga_iface_config(self, iface: CoreInterface) -> str: router_id = get_router_id(self.node) text = f""" ! 
BGP configuration @@ -319,9 +281,6 @@ class Bgp(QuaggaService, ConfigService): """ return self.clean_text(text) - def quagga_iface_config(self, iface: CoreInterface) -> str: - return "" - class Rip(QuaggaService, ConfigService): """ @@ -329,8 +288,8 @@ class Rip(QuaggaService, ConfigService): """ name: str = "RIP" - shutdown: list[str] = ["killall ripd"] - validate: list[str] = ["pidof ripd"] + shutdown: List[str] = ["killall ripd"] + validate: List[str] = ["pidof ripd"] ipv4_routing: bool = True def quagga_config(self) -> str: @@ -354,8 +313,8 @@ class Ripng(QuaggaService, ConfigService): """ name: str = "RIPNG" - shutdown: list[str] = ["killall ripngd"] - validate: list[str] = ["pidof ripngd"] + shutdown: List[str] = ["killall ripngd"] + validate: List[str] = ["pidof ripngd"] ipv6_routing: bool = True def quagga_config(self) -> str: @@ -380,8 +339,8 @@ class Babel(QuaggaService, ConfigService): """ name: str = "Babel" - shutdown: list[str] = ["killall babeld"] - validate: list[str] = ["pidof babeld"] + shutdown: List[str] = ["killall babeld"] + validate: List[str] = ["pidof babeld"] ipv6_routing: bool = True def quagga_config(self) -> str: @@ -401,7 +360,7 @@ class Babel(QuaggaService, ConfigService): return self.render_text(text, data) def quagga_iface_config(self, iface: CoreInterface) -> str: - if is_wireless(iface.net): + if isinstance(iface.net, (WlanNode, EmaneNet)): text = """ babel wireless no babel split-horizon @@ -420,8 +379,8 @@ class Xpimd(QuaggaService, ConfigService): """ name: str = "Xpimd" - shutdown: list[str] = ["killall xpimd"] - validate: list[str] = ["pidof xpimd"] + shutdown: List[str] = ["killall xpimd"] + validate: List[str] = ["pidof xpimd"] ipv4_routing: bool = True def quagga_config(self) -> str: diff --git a/daemon/core/configservices/quaggaservices/templates/usr/local/etc/quagga/Quagga.conf b/daemon/core/configservices/quaggaservices/templates/Quagga.conf similarity index 60% rename from daemon/core/configservices/quaggaservices/templates/usr/local/etc/quagga/Quagga.conf rename to daemon/core/configservices/quaggaservices/templates/Quagga.conf index b7916f96..1d69838f 100644 --- a/daemon/core/configservices/quaggaservices/templates/usr/local/etc/quagga/Quagga.conf +++ b/daemon/core/configservices/quaggaservices/templates/Quagga.conf @@ -1,4 +1,4 @@ -% for iface, ip4s, ip6s, configs in ifaces: +% for iface, ip4s, ip6s, is_control in ifaces: interface ${iface.name} % if want_ip4: % for addr in ip4s: @@ -10,11 +10,13 @@ interface ${iface.name} ipv6 address ${addr} % endfor % endif - % for config in configs: - % for line in config: + % if not is_control: + % for service in services: + % for line in service.quagga_iface_config(iface).split("\n"): ${line} + % endfor % endfor - % endfor + % endif ! 
% endfor diff --git a/daemon/core/configservices/quaggaservices/templates/usr/local/etc/quagga/vtysh.conf b/daemon/core/configservices/quaggaservices/templates/vtysh.conf similarity index 100% rename from daemon/core/configservices/quaggaservices/templates/usr/local/etc/quagga/vtysh.conf rename to daemon/core/configservices/quaggaservices/templates/vtysh.conf diff --git a/daemon/core/configservices/securityservices/services.py b/daemon/core/configservices/securityservices/services.py index e6243b2c..c656f5ca 100644 --- a/daemon/core/configservices/securityservices/services.py +++ b/daemon/core/configservices/securityservices/services.py @@ -1,7 +1,8 @@ -from typing import Any +from typing import Any, Dict, List -from core.config import ConfigString, Configuration +from core.config import Configuration from core.configservice.base import ConfigService, ConfigServiceMode +from core.emulator.enumerations import ConfigDataTypes GROUP_NAME: str = "Security" @@ -9,41 +10,71 @@ GROUP_NAME: str = "Security" class VpnClient(ConfigService): name: str = "VPNClient" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["vpnclient.sh"] - executables: list[str] = ["openvpn", "ip", "killall"] - dependencies: list[str] = [] - startup: list[str] = ["bash vpnclient.sh"] - validate: list[str] = ["pidof openvpn"] - shutdown: list[str] = ["killall openvpn"] + directories: List[str] = [] + files: List[str] = ["vpnclient.sh"] + executables: List[str] = ["openvpn", "ip", "killall"] + dependencies: List[str] = [] + startup: List[str] = ["bash vpnclient.sh"] + validate: List[str] = ["pidof openvpn"] + shutdown: List[str] = ["killall openvpn"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [ - ConfigString(id="keydir", label="Key Dir", default="/etc/core/keys"), - ConfigString(id="keyname", label="Key Name", default="client1"), - ConfigString(id="server", label="Server", default="10.0.2.10"), + default_configs: List[Configuration] = [ + Configuration( + _id="keydir", + _type=ConfigDataTypes.STRING, + label="Key Dir", + default="/etc/core/keys", + ), + Configuration( + _id="keyname", + _type=ConfigDataTypes.STRING, + label="Key Name", + default="client1", + ), + Configuration( + _id="server", + _type=ConfigDataTypes.STRING, + label="Server", + default="10.0.2.10", + ), ] - modes: dict[str, dict[str, str]] = {} + modes: Dict[str, Dict[str, str]] = {} class VpnServer(ConfigService): name: str = "VPNServer" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["vpnserver.sh"] - executables: list[str] = ["openvpn", "ip", "killall"] - dependencies: list[str] = [] - startup: list[str] = ["bash vpnserver.sh"] - validate: list[str] = ["pidof openvpn"] - shutdown: list[str] = ["killall openvpn"] + directories: List[str] = [] + files: List[str] = ["vpnserver.sh"] + executables: List[str] = ["openvpn", "ip", "killall"] + dependencies: List[str] = [] + startup: List[str] = ["bash vpnserver.sh"] + validate: List[str] = ["pidof openvpn"] + shutdown: List[str] = ["killall openvpn"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [ - ConfigString(id="keydir", label="Key Dir", default="/etc/core/keys"), - ConfigString(id="keyname", label="Key Name", default="server"), - ConfigString(id="subnet", label="Subnet", default="10.0.200.0"), + default_configs: List[Configuration] = [ + Configuration( + _id="keydir", + _type=ConfigDataTypes.STRING, + label="Key Dir", + 
default="/etc/core/keys", + ), + Configuration( + _id="keyname", + _type=ConfigDataTypes.STRING, + label="Key Name", + default="server", + ), + Configuration( + _id="subnet", + _type=ConfigDataTypes.STRING, + label="Subnet", + default="10.0.200.0", + ), ] - modes: dict[str, dict[str, str]] = {} + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: address = None for iface in self.node.get_ifaces(control=False): ip4 = iface.get_ip4() @@ -56,48 +87,48 @@ class VpnServer(ConfigService): class IPsec(ConfigService): name: str = "IPsec" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["ipsec.sh"] - executables: list[str] = ["racoon", "ip", "setkey", "killall"] - dependencies: list[str] = [] - startup: list[str] = ["bash ipsec.sh"] - validate: list[str] = ["pidof racoon"] - shutdown: list[str] = ["killall racoon"] + directories: List[str] = [] + files: List[str] = ["ipsec.sh"] + executables: List[str] = ["racoon", "ip", "setkey", "killall"] + dependencies: List[str] = [] + startup: List[str] = ["bash ipsec.sh"] + validate: List[str] = ["pidof racoon"] + shutdown: List[str] = ["killall racoon"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} class Firewall(ConfigService): name: str = "Firewall" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["firewall.sh"] - executables: list[str] = ["iptables"] - dependencies: list[str] = [] - startup: list[str] = ["bash firewall.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["firewall.sh"] + executables: List[str] = ["iptables"] + dependencies: List[str] = [] + startup: List[str] = ["bash firewall.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} class Nat(ConfigService): name: str = "NAT" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["nat.sh"] - executables: list[str] = ["iptables"] - dependencies: list[str] = [] - startup: list[str] = ["bash nat.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["nat.sh"] + executables: List[str] = ["iptables"] + dependencies: List[str] = [] + startup: List[str] = ["bash nat.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifnames = [] for iface in self.node.get_ifaces(control=False): ifnames.append(iface.name) diff --git a/daemon/core/configservices/simpleservice.py b/daemon/core/configservices/simpleservice.py new file mode 100644 index 00000000..c2e7242f --- /dev/null +++ b/daemon/core/configservices/simpleservice.py @@ -0,0 +1,49 @@ +from typing import Dict, List + +from core.config import Configuration +from core.configservice.base import ConfigService, ConfigServiceMode +from core.emulator.enumerations import ConfigDataTypes + + 
+class SimpleService(ConfigService): + name: str = "Simple" + group: str = "SimpleGroup" + directories: List[str] = ["/etc/quagga", "/usr/local/lib"] + files: List[str] = ["test1.sh", "test2.sh"] + executables: List[str] = [] + dependencies: List[str] = [] + startup: List[str] = [] + validate: List[str] = [] + shutdown: List[str] = [] + validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING + default_configs: List[Configuration] = [ + Configuration(_id="value1", _type=ConfigDataTypes.STRING, label="Text"), + Configuration(_id="value2", _type=ConfigDataTypes.BOOL, label="Boolean"), + Configuration( + _id="value3", + _type=ConfigDataTypes.STRING, + label="Multiple Choice", + options=["value1", "value2", "value3"], + ), + ] + modes: Dict[str, Dict[str, str]] = { + "mode1": {"value1": "value1", "value2": "0", "value3": "value2"}, + "mode2": {"value1": "value2", "value2": "1", "value3": "value3"}, + "mode3": {"value1": "value3", "value2": "0", "value3": "value1"}, + } + + def get_text_template(self, name: str) -> str: + if name == "test1.sh": + return """ + # sample script 1 + # node id(${node.id}) name(${node.name}) + # config: ${config} + echo hello + """ + elif name == "test2.sh": + return """ + # sample script 2 + # node id(${node.id}) name(${node.name}) + # config: ${config} + echo hello2 + """ diff --git a/daemon/core/configservices/utilservices/services.py b/daemon/core/configservices/utilservices/services.py index 73d72060..633da333 100644 --- a/daemon/core/configservices/utilservices/services.py +++ b/daemon/core/configservices/utilservices/services.py @@ -1,4 +1,4 @@ -from typing import Any +from typing import Any, Dict, List import netaddr @@ -12,18 +12,18 @@ GROUP_NAME = "Utility" class DefaultRouteService(ConfigService): name: str = "DefaultRoute" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["defaultroute.sh"] - executables: list[str] = ["ip"] - dependencies: list[str] = [] - startup: list[str] = ["bash defaultroute.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["defaultroute.sh"] + executables: List[str] = ["ip"] + dependencies: List[str] = [] + startup: List[str] = ["bash defaultroute.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: # only add default routes for linked routing nodes routes = [] ifaces = self.node.get_ifaces() @@ -40,18 +40,18 @@ class DefaultRouteService(ConfigService): class DefaultMulticastRouteService(ConfigService): name: str = "DefaultMulticastRoute" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["defaultmroute.sh"] - executables: list[str] = [] - dependencies: list[str] = [] - startup: list[str] = ["bash defaultmroute.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["defaultmroute.sh"] + executables: List[str] = [] + dependencies: List[str] = [] + startup: List[str] = ["bash defaultmroute.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, 
Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifname = None for iface in self.node.get_ifaces(control=False): ifname = iface.name @@ -62,18 +62,18 @@ class DefaultMulticastRouteService(ConfigService): class StaticRouteService(ConfigService): name: str = "StaticRoute" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["staticroute.sh"] - executables: list[str] = [] - dependencies: list[str] = [] - startup: list[str] = ["bash staticroute.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["staticroute.sh"] + executables: List[str] = [] + dependencies: List[str] = [] + startup: List[str] = ["bash staticroute.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: routes = [] for iface in self.node.get_ifaces(control=False): for ip in iface.ips(): @@ -90,18 +90,18 @@ class StaticRouteService(ConfigService): class IpForwardService(ConfigService): name: str = "IPForward" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["ipforward.sh"] - executables: list[str] = ["sysctl"] - dependencies: list[str] = [] - startup: list[str] = ["bash ipforward.sh"] - validate: list[str] = [] - shutdown: list[str] = [] + directories: List[str] = [] + files: List[str] = ["ipforward.sh"] + executables: List[str] = ["sysctl"] + dependencies: List[str] = [] + startup: List[str] = ["bash ipforward.sh"] + validate: List[str] = [] + shutdown: List[str] = [] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: devnames = [] for iface in self.node.get_ifaces(): devname = utils.sysctl_devname(iface.name) @@ -112,18 +112,18 @@ class IpForwardService(ConfigService): class SshService(ConfigService): name: str = "SSH" group: str = GROUP_NAME - directories: list[str] = ["/etc/ssh", "/var/run/sshd"] - files: list[str] = ["startsshd.sh", "/etc/ssh/sshd_config"] - executables: list[str] = ["sshd"] - dependencies: list[str] = [] - startup: list[str] = ["bash startsshd.sh"] - validate: list[str] = [] - shutdown: list[str] = ["killall sshd"] + directories: List[str] = ["/etc/ssh", "/var/run/sshd"] + files: List[str] = ["startsshd.sh", "/etc/ssh/sshd_config"] + executables: List[str] = ["sshd"] + dependencies: List[str] = [] + startup: List[str] = ["bash startsshd.sh"] + validate: List[str] = [] + shutdown: List[str] = ["killall sshd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: return dict( sshcfgdir=self.directories[0], sshstatedir=self.directories[1], @@ -134,46 +134,44 @@ class SshService(ConfigService): class DhcpService(ConfigService): name: str = "DHCP" group: str = GROUP_NAME - directories: list[str] = ["/etc/dhcp", "/var/lib/dhcp"] - files: list[str] = 
["/etc/dhcp/dhcpd.conf"] - executables: list[str] = ["dhcpd"] - dependencies: list[str] = [] - startup: list[str] = ["touch /var/lib/dhcp/dhcpd.leases", "dhcpd"] - validate: list[str] = ["pidof dhcpd"] - shutdown: list[str] = ["killall dhcpd"] + directories: List[str] = ["/etc/dhcp", "/var/lib/dhcp"] + files: List[str] = ["/etc/dhcp/dhcpd.conf"] + executables: List[str] = ["dhcpd"] + dependencies: List[str] = [] + startup: List[str] = ["touch /var/lib/dhcp/dhcpd.leases", "dhcpd"] + validate: List[str] = ["pidof dhcpd"] + shutdown: List[str] = ["killall dhcpd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: subnets = [] for iface in self.node.get_ifaces(control=False): for ip4 in iface.ip4s: - if ip4.size == 1: - continue # divide the address space in half index = (ip4.size - 2) / 2 rangelow = ip4[index] rangehigh = ip4[-2] - subnets.append((ip4.cidr.ip, ip4.netmask, rangelow, rangehigh, ip4.ip)) + subnets.append((ip4.ip, ip4.netmask, rangelow, rangehigh, str(ip4.ip))) return dict(subnets=subnets) class DhcpClientService(ConfigService): name: str = "DHCPClient" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["startdhcpclient.sh"] - executables: list[str] = ["dhclient"] - dependencies: list[str] = [] - startup: list[str] = ["bash startdhcpclient.sh"] - validate: list[str] = ["pidof dhclient"] - shutdown: list[str] = ["killall dhclient"] + directories: List[str] = [] + files: List[str] = ["startdhcpclient.sh"] + executables: List[str] = ["dhclient"] + dependencies: List[str] = [] + startup: List[str] = ["bash startdhcpclient.sh"] + validate: List[str] = ["pidof dhclient"] + shutdown: List[str] = ["killall dhclient"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifnames = [] for iface in self.node.get_ifaces(control=False): ifnames.append(iface.name) @@ -183,56 +181,56 @@ class DhcpClientService(ConfigService): class FtpService(ConfigService): name: str = "FTP" group: str = GROUP_NAME - directories: list[str] = ["/var/run/vsftpd/empty", "/var/ftp"] - files: list[str] = ["vsftpd.conf"] - executables: list[str] = ["vsftpd"] - dependencies: list[str] = [] - startup: list[str] = ["vsftpd ./vsftpd.conf"] - validate: list[str] = ["pidof vsftpd"] - shutdown: list[str] = ["killall vsftpd"] + directories: List[str] = ["/var/run/vsftpd/empty", "/var/ftp"] + files: List[str] = ["vsftpd.conf"] + executables: List[str] = ["vsftpd"] + dependencies: List[str] = [] + startup: List[str] = ["vsftpd ./vsftpd.conf"] + validate: List[str] = ["pidof vsftpd"] + shutdown: List[str] = ["killall vsftpd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} class PcapService(ConfigService): name: str = "pcap" group: str = GROUP_NAME - directories: list[str] = [] - files: list[str] = ["pcap.sh"] - executables: list[str] = ["tcpdump"] - dependencies: list[str] = [] - startup: list[str] = ["bash pcap.sh start"] - 
validate: list[str] = ["pidof tcpdump"] - shutdown: list[str] = ["bash pcap.sh stop"] + directories: List[str] = [] + files: List[str] = ["pcap.sh"] + executables: List[str] = ["tcpdump"] + dependencies: List[str] = [] + startup: List[str] = ["bash pcap.sh start"] + validate: List[str] = ["pidof tcpdump"] + shutdown: List[str] = ["bash pcap.sh stop"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifnames = [] for iface in self.node.get_ifaces(control=False): ifnames.append(iface.name) - return dict(ifnames=ifnames) + return dict() class RadvdService(ConfigService): name: str = "radvd" group: str = GROUP_NAME - directories: list[str] = ["/etc/radvd", "/var/run/radvd"] - files: list[str] = ["/etc/radvd/radvd.conf"] - executables: list[str] = ["radvd"] - dependencies: list[str] = [] - startup: list[str] = [ + directories: List[str] = ["/etc/radvd", "/var/run/radvd"] + files: List[str] = ["/etc/radvd/radvd.conf"] + executables: List[str] = ["radvd"] + dependencies: List[str] = [] + startup: List[str] = [ "radvd -C /etc/radvd/radvd.conf -m logfile -l /var/log/radvd.log" ] - validate: list[str] = ["pidof radvd"] - shutdown: list[str] = ["pkill radvd"] + validate: List[str] = ["pidof radvd"] + shutdown: List[str] = ["pkill radvd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifaces = [] for iface in self.node.get_ifaces(control=False): prefixes = [] @@ -247,22 +245,22 @@ class RadvdService(ConfigService): class AtdService(ConfigService): name: str = "atd" group: str = GROUP_NAME - directories: list[str] = ["/var/spool/cron/atjobs", "/var/spool/cron/atspool"] - files: list[str] = ["startatd.sh"] - executables: list[str] = ["atd"] - dependencies: list[str] = [] - startup: list[str] = ["bash startatd.sh"] - validate: list[str] = ["pidof atd"] - shutdown: list[str] = ["pkill atd"] + directories: List[str] = ["/var/spool/cron/atjobs", "/var/spool/cron/atspool"] + files: List[str] = ["startatd.sh"] + executables: List[str] = ["atd"] + dependencies: List[str] = [] + startup: List[str] = ["bash startatd.sh"] + validate: List[str] = ["pidof atd"] + shutdown: List[str] = ["pkill atd"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} class HttpService(ConfigService): name: str = "HTTP" group: str = GROUP_NAME - directories: list[str] = [ + directories: List[str] = [ "/etc/apache2", "/var/run/apache2", "/var/log/apache2", @@ -270,21 +268,21 @@ class HttpService(ConfigService): "/var/lock/apache2", "/var/www", ] - files: list[str] = [ + files: List[str] = [ "/etc/apache2/apache2.conf", "/etc/apache2/envvars", "/var/www/index.html", ] - executables: list[str] = ["apache2ctl"] - dependencies: list[str] = [] - startup: list[str] = ["chown www-data /var/lock/apache2", "apache2ctl start"] - validate: list[str] = ["pidof apache2"] - shutdown: list[str] = ["apache2ctl stop"] + executables: List[str] = ["apache2ctl"] + 
dependencies: List[str] = [] + startup: List[str] = ["chown www-data /var/lock/apache2", "apache2ctl start"] + validate: List[str] = ["pidof apache2"] + shutdown: List[str] = ["apache2ctl stop"] validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - default_configs: list[Configuration] = [] - modes: dict[str, dict[str, str]] = {} + default_configs: List[Configuration] = [] + modes: Dict[str, Dict[str, str]] = {} - def data(self) -> dict[str, Any]: + def data(self) -> Dict[str, Any]: ifaces = [] for iface in self.node.get_ifaces(control=False): ifaces.append(iface) diff --git a/daemon/core/configservices/utilservices/templates/etc/apache2/apache2.conf b/daemon/core/configservices/utilservices/templates/apache2.conf similarity index 100% rename from daemon/core/configservices/utilservices/templates/etc/apache2/apache2.conf rename to daemon/core/configservices/utilservices/templates/apache2.conf diff --git a/daemon/core/configservices/utilservices/templates/etc/dhcp/dhcpd.conf b/daemon/core/configservices/utilservices/templates/dhcpd.conf similarity index 100% rename from daemon/core/configservices/utilservices/templates/etc/dhcp/dhcpd.conf rename to daemon/core/configservices/utilservices/templates/dhcpd.conf diff --git a/daemon/core/configservices/utilservices/templates/etc/apache2/envvars b/daemon/core/configservices/utilservices/templates/envvars similarity index 100% rename from daemon/core/configservices/utilservices/templates/etc/apache2/envvars rename to daemon/core/configservices/utilservices/templates/envvars diff --git a/daemon/core/configservices/utilservices/templates/var/www/index.html b/daemon/core/configservices/utilservices/templates/index.html similarity index 100% rename from daemon/core/configservices/utilservices/templates/var/www/index.html rename to daemon/core/configservices/utilservices/templates/index.html diff --git a/daemon/core/configservices/utilservices/templates/ipforward.sh b/daemon/core/configservices/utilservices/templates/ipforward.sh index 75717ecf..a8d3abed 100644 --- a/daemon/core/configservices/utilservices/templates/ipforward.sh +++ b/daemon/core/configservices/utilservices/templates/ipforward.sh @@ -13,5 +13,4 @@ sysctl -w net.ipv4.conf.default.rp_filter=0 sysctl -w net.ipv4.conf.${devname}.forwarding=1 sysctl -w net.ipv4.conf.${devname}.send_redirects=0 sysctl -w net.ipv4.conf.${devname}.rp_filter=0 -sysctl -w net.ipv6.conf.${devname}.forwarding=1 % endfor diff --git a/daemon/core/configservices/utilservices/templates/pcap.sh b/daemon/core/configservices/utilservices/templates/pcap.sh index d4a0ea9f..6a099f8c 100644 --- a/daemon/core/configservices/utilservices/templates/pcap.sh +++ b/daemon/core/configservices/utilservices/templates/pcap.sh @@ -3,7 +3,7 @@ # (-s snap length, -C limit pcap file length, -n disable name resolution) if [ "x$1" = "xstart" ]; then % for ifname in ifnames: - tcpdump -s 12288 -C 10 -n -w ${node.name}.${ifname}.pcap -i ${ifname} > /dev/null 2>&1 & + tcpdump -s 12288 -C 10 -n -w ${node.name}.${ifname}.pcap -i ${ifname} < /dev/null & % endfor elif [ "x$1" = "xstop" ]; then mkdir -p $SESSION_DIR/pcap diff --git a/daemon/core/configservices/utilservices/templates/etc/radvd/radvd.conf b/daemon/core/configservices/utilservices/templates/radvd.conf similarity index 92% rename from daemon/core/configservices/utilservices/templates/etc/radvd/radvd.conf rename to daemon/core/configservices/utilservices/templates/radvd.conf index d003b4b1..1436f068 100644 --- 
a/daemon/core/configservices/utilservices/templates/etc/radvd/radvd.conf +++ b/daemon/core/configservices/utilservices/templates/radvd.conf @@ -1,5 +1,5 @@ # auto-generated by RADVD service (utility.py) -% for ifname, prefixes in ifaces: +% for ifname, prefixes in values: interface ${ifname} { AdvSendAdvert on; diff --git a/daemon/core/configservices/utilservices/templates/etc/ssh/sshd_config b/daemon/core/configservices/utilservices/templates/sshd_config similarity index 100% rename from daemon/core/configservices/utilservices/templates/etc/ssh/sshd_config rename to daemon/core/configservices/utilservices/templates/sshd_config diff --git a/daemon/core/constants.py.in b/daemon/core/constants.py.in index 1ade8287..cb566e40 100644 --- a/daemon/core/constants.py.in +++ b/daemon/core/constants.py.in @@ -1,5 +1,3 @@ -from pathlib import Path - -COREDPY_VERSION: str = "@PACKAGE_VERSION@" -CORE_CONF_DIR: Path = Path("@CORE_CONF_DIR@") -CORE_DATA_DIR: Path = Path("@CORE_DATA_DIR@") +COREDPY_VERSION = "@PACKAGE_VERSION@" +CORE_CONF_DIR = "@CORE_CONF_DIR@" +CORE_DATA_DIR = "@CORE_DATA_DIR@" diff --git a/daemon/core/emane/models/bypass.py b/daemon/core/emane/bypass.py similarity index 51% rename from daemon/core/emane/models/bypass.py rename to daemon/core/emane/bypass.py index e8f2ed39..8aabc3f9 100644 --- a/daemon/core/emane/models/bypass.py +++ b/daemon/core/emane/bypass.py @@ -1,23 +1,25 @@ """ EMANE Bypass model for CORE """ -from pathlib import Path +from typing import List, Set -from core.config import ConfigBool, Configuration +from core.config import Configuration from core.emane import emanemodel +from core.emulator.enumerations import ConfigDataTypes class EmaneBypassModel(emanemodel.EmaneModel): name: str = "emane_bypass" # values to ignore, when writing xml files - config_ignore: set[str] = {"none"} + config_ignore: Set[str] = {"none"} # mac definitions mac_library: str = "bypassmaclayer" - mac_config: list[Configuration] = [ - ConfigBool( - id="none", + mac_config: List[Configuration] = [ + Configuration( + _id="none", + _type=ConfigDataTypes.BOOL, default="0", label="There are no parameters for the bypass model.", ) @@ -25,8 +27,9 @@ class EmaneBypassModel(emanemodel.EmaneModel): # phy definitions phy_library: str = "bypassphylayer" - phy_config: list[Configuration] = [] + phy_config: List[Configuration] = [] @classmethod - def load(cls, emane_prefix: Path) -> None: - cls._load_platform_config(emane_prefix) + def load(cls, emane_prefix: str) -> None: + # ignore default logic + pass diff --git a/daemon/core/emane/models/commeffect.py b/daemon/core/emane/commeffect.py similarity index 76% rename from daemon/core/emane/models/commeffect.py rename to daemon/core/emane/commeffect.py index aa093a93..13ec53f7 100644 --- a/daemon/core/emane/models/commeffect.py +++ b/daemon/core/emane/commeffect.py @@ -3,7 +3,8 @@ commeffect.py: EMANE CommEffect model for CORE """ import logging -from pathlib import Path +import os +from typing import Dict, List from lxml import etree @@ -13,8 +14,6 @@ from core.emulator.data import LinkOptions from core.nodes.interface import CoreInterface from core.xml import emanexml -logger = logging.getLogger(__name__) - try: from emane.events.commeffectevent import CommEffectEvent except ImportError: @@ -22,7 +21,7 @@ except ImportError: from emanesh.events.commeffectevent import CommEffectEvent except ImportError: CommEffectEvent = None - logger.debug("compatible emane python bindings not installed") + logging.debug("compatible emane python bindings not installed") def 
convert_none(x: float) -> int: @@ -41,36 +40,27 @@ class EmaneCommEffectModel(emanemodel.EmaneModel): name: str = "emane_commeffect" shim_library: str = "commeffectshim" shim_xml: str = "commeffectshim.xml" - shim_defaults: dict[str, str] = {} - config_shim: list[Configuration] = [] + shim_defaults: Dict[str, str] = {} + config_shim: List[Configuration] = [] # comm effect does not need the default phy and external configurations - phy_config: list[Configuration] = [] - external_config: list[Configuration] = [] + phy_config: List[Configuration] = [] + external_config: List[Configuration] = [] @classmethod - def load(cls, emane_prefix: Path) -> None: - cls._load_platform_config(emane_prefix) - shim_xml_path = emane_prefix / "share/emane/manifest" / cls.shim_xml + def load(cls, emane_prefix: str) -> None: + shim_xml_path = os.path.join(emane_prefix, "share/emane/manifest", cls.shim_xml) cls.config_shim = emanemanifest.parse(shim_xml_path, cls.shim_defaults) @classmethod - def configurations(cls) -> list[Configuration]: - return cls.platform_config + cls.config_shim + def configurations(cls) -> List[Configuration]: + return cls.config_shim @classmethod - def config_groups(cls) -> list[ConfigGroup]: - platform_len = len(cls.platform_config) - return [ - ConfigGroup("Platform Parameters", 1, platform_len), - ConfigGroup( - "CommEffect SHIM Parameters", - platform_len + 1, - len(cls.configurations()), - ), - ] + def config_groups(cls) -> List[ConfigGroup]: + return [ConfigGroup("CommEffect SHIM Parameters", 1, len(cls.configurations()))] - def build_xml_files(self, config: dict[str, str], iface: CoreInterface) -> None: + def build_xml_files(self, config: Dict[str, str], iface: CoreInterface) -> None: """ Build the necessary nem and commeffect XMLs in the given path. If an individual NEM has a nonstandard config, we need to build @@ -121,15 +111,21 @@ class EmaneCommEffectModel(emanemodel.EmaneModel): Generate CommEffect events when a Link Message is received having link parameters. """ - if iface is None or iface2 is None: - logger.warning("%s: missing NEM information", self.name) + service = self.session.emane.service + if service is None: + logging.warning("%s: EMANE event service unavailable", self.name) return + + if iface is None or iface2 is None: + logging.warning("%s: missing NEM information", self.name) + return + # TODO: batch these into multiple events per transmission # TODO: may want to split out seconds portion of delay and jitter event = CommEffectEvent() nem1 = self.session.emane.get_nem_id(iface) nem2 = self.session.emane.get_nem_id(iface2) - logger.info("sending comm effect event") + logging.info("sending comm effect event") event.append( nem1, latency=convert_none(options.delay), @@ -139,4 +135,4 @@ class EmaneCommEffectModel(emanemodel.EmaneModel): unicast=int(convert_none(options.bandwidth)), broadcast=int(convert_none(options.bandwidth)), ) - self.session.emane.publish_event(nem2, event) + service.publish(nem2, event) diff --git a/daemon/core/emane/emanemanager.py b/daemon/core/emane/emanemanager.py index c02570c9..6ae66b93 100644 --- a/daemon/core/emane/emanemanager.py +++ b/daemon/core/emane/emanemanager.py @@ -1,51 +1,66 @@ """ -Implements configuration and control of an EMANE emulation. +emane.py: definition of an Emane class for implementing configuration control of an EMANE emulation. 
""" import logging import os import threading +from collections import OrderedDict +from dataclasses import dataclass, field from enum import Enum -from typing import TYPE_CHECKING, Optional, Union +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple, Type from core import utils +from core.config import ConfigGroup, Configuration, ModelManager +from core.emane import emanemanifest +from core.emane.bypass import EmaneBypassModel +from core.emane.commeffect import EmaneCommEffectModel from core.emane.emanemodel import EmaneModel +from core.emane.ieee80211abg import EmaneIeee80211abgModel from core.emane.linkmonitor import EmaneLinkMonitor -from core.emane.modelmanager import EmaneModelManager -from core.emane.nodes import EmaneNet, TunTap +from core.emane.nodes import EmaneNet +from core.emane.rfpipe import EmaneRfPipeModel +from core.emane.tdma import EmaneTdmaModel from core.emulator.data import LinkData -from core.emulator.enumerations import LinkTypes, MessageFlags, RegisterTlvs +from core.emulator.enumerations import ( + ConfigDataTypes, + LinkTypes, + MessageFlags, + RegisterTlvs, +) from core.errors import CoreCommandError, CoreError -from core.nodes.base import CoreNode, NodeBase -from core.nodes.interface import CoreInterface +from core.nodes.base import CoreNetworkBase, CoreNode, CoreNodeBase, NodeBase +from core.nodes.interface import CoreInterface, TunTap from core.xml import emanexml -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.session import Session try: - from emane.events import EventService, PathlossEvent, CommEffectEvent, LocationEvent + from emane.events import EventService, PathlossEvent + from emane.events import LocationEvent from emane.events.eventserviceexception import EventServiceException except ImportError: try: - from emanesh.events import ( - EventService, - PathlossEvent, - CommEffectEvent, - LocationEvent, - ) + from emanesh.events import EventService + from emanesh.events import LocationEvent from emanesh.events.eventserviceexception import EventServiceException except ImportError: - CommEffectEvent = None EventService = None LocationEvent = None PathlossEvent = None EventServiceException = None - logger.debug("compatible emane python bindings not installed") + logging.debug("compatible emane python bindings not installed") -DEFAULT_LOG_LEVEL: int = 3 +EMANE_MODELS = [ + EmaneRfPipeModel, + EmaneIeee80211abgModel, + EmaneCommEffectModel, + EmaneBypassModel, + EmaneTdmaModel, +] +DEFAULT_EMANE_PREFIX = "/usr" +DEFAULT_DEV = "ctrl0" class EmaneState(Enum): @@ -54,60 +69,13 @@ class EmaneState(Enum): NOT_READY = 2 -class EmaneEventService: - def __init__( - self, manager: "EmaneManager", device: str, group: str, port: int - ) -> None: - self.manager: "EmaneManager" = manager - self.device: str = device - self.group: str = group - self.port: int = port - self.running: bool = False - self.thread: Optional[threading.Thread] = None - logger.info("starting emane event service %s %s:%s", device, group, port) - self.events: EventService = EventService( - eventchannel=(group, port, device), otachannel=None - ) - - def start(self) -> None: - self.running = True - self.thread = threading.Thread(target=self.run, daemon=True) - self.thread.start() - - def run(self) -> None: - """ - Run and monitor events. 
- """ - logger.info("subscribing to emane location events") - while self.running: - _uuid, _seq, events = self.events.nextEvent() - # this occurs with 0.9.1 event service - if not self.running: - break - for event in events: - nem, eid, data = event - if eid == LocationEvent.IDENTIFIER: - self.manager.handlelocationevent(nem, eid, data) - logger.info("unsubscribing from emane location events") - - def stop(self) -> None: - """ - Stop service and monitoring events. - """ - self.events.breakloop() - self.running = False - if self.thread: - self.thread.join() - self.thread = None - for fd in self.events._readFd, self.events._writeFd: - if fd >= 0: - os.close(fd) - for f in self.events._socket, self.events._socketOTA: - if f: - f.close() +@dataclass +class StartData: + node: CoreNodeBase + ifaces: List[CoreInterface] = field(default_factory=list) -class EmaneManager: +class EmaneManager(ModelManager): """ EMANE controller object. Lives in a Session instance and is used for building EMANE config files for all EMANE networks in this emulation, and for @@ -116,6 +84,9 @@ class EmaneManager: name: str = "emane" config_type: RegisterTlvs = RegisterTlvs.EMULATION_SERVER + NOT_READY: int = 2 + EVENTCFGVAR: str = "LIBEMANEEVENTSERVICECONFIG" + DEFAULT_LOG_LEVEL: int = 3 def __init__(self, session: "Session") -> None: """ @@ -126,92 +97,41 @@ class EmaneManager: """ super().__init__() self.session: "Session" = session - self.nems_to_ifaces: dict[int, CoreInterface] = {} - self.ifaces_to_nems: dict[CoreInterface, int] = {} - self._emane_nets: dict[int, EmaneNet] = {} + self.nems_to_ifaces: Dict[int, CoreInterface] = {} + self.ifaces_to_nems: Dict[CoreInterface, int] = {} + self._emane_nets: Dict[int, EmaneNet] = {} self._emane_node_lock: threading.Lock = threading.Lock() # port numbers are allocated from these counters - self.platformport: int = self.session.options.get_int( + self.platformport: int = self.session.options.get_config_int( "emane_platform_port", 8100 ) - self.transformport: int = self.session.options.get_int( + self.transformport: int = self.session.options.get_config_int( "emane_transform_port", 8200 ) self.doeventloop: bool = False self.eventmonthread: Optional[threading.Thread] = None # model for global EMANE configuration options - self.node_configs: dict[int, dict[str, dict[str, str]]] = {} - self.node_models: dict[int, str] = {} + self.emane_config: EmaneGlobalModel = EmaneGlobalModel(session) + self.set_configs(self.emane_config.default_values()) # link monitor self.link_monitor: EmaneLinkMonitor = EmaneLinkMonitor(self) - # emane event monitoring - self.services: dict[str, EmaneEventService] = {} - self.nem_service: dict[int, EmaneEventService] = {} - def next_nem_id(self, iface: CoreInterface) -> int: - nem_id = self.session.options.get_int("nem_id_start") + self.service: Optional[EventService] = None + self.eventchannel: Optional[Tuple[str, int, str]] = None + self.event_device: Optional[str] = None + self.emane_check() + + def next_nem_id(self) -> int: + nem_id = int(self.get_config("nem_id_start")) while nem_id in self.nems_to_ifaces: nem_id += 1 - self.nems_to_ifaces[nem_id] = iface - self.ifaces_to_nems[iface] = nem_id - self.write_nem(iface, nem_id) return nem_id - def get_config( - self, key: int, model: str, default: bool = True - ) -> Optional[dict[str, str]]: - """ - Get the current or default configuration for an emane model. 
- - :param key: key to get configuration for - :param model: emane model to get configuration for - :param default: True to return default configuration when none exists, False - otherwise - :return: emane model configuration - :raises CoreError: when model does not exist - """ - model_class = self.get_model(model) - model_configs = self.node_configs.get(key) - config = None - if model_configs: - config = model_configs.get(model) - if config is None and default: - config = model_class.default_values() - return config - - def set_config(self, key: int, model: str, config: dict[str, str] = None) -> None: - """ - Sets and update the provided configuration against the default model - or currently set emane model configuration. - - :param key: configuration key to set - :param model: model to set configuration for - :param config: configuration to update current configuration with - :return: nothing - :raises CoreError: when model does not exist - """ - self.get_model(model) - model_config = self.get_config(key, model) - config = config if config else {} - model_config.update(config) - model_configs = self.node_configs.setdefault(key, {}) - model_configs[model] = model_config - - def get_model(self, model_name: str) -> type[EmaneModel]: - """ - Convenience method for getting globally loaded emane models. - - :param model_name: name of model to retrieve - :return: emane model class - :raises CoreError: when model does not exist - """ - return EmaneModelManager.get(model_name) - def get_iface_config( self, emane_net: EmaneNet, iface: CoreInterface - ) -> dict[str, str]: + ) -> Dict[str, str]: """ Retrieve configuration for a given interface, first checking for interface specific config, node specific config, network specific config, and finally @@ -221,30 +141,113 @@ class EmaneManager: :param iface: interface running emane :return: net, node, or interface model configuration """ - model_name = emane_net.wireless_model.name + model_name = emane_net.model.name + config = None # try to retrieve interface specific configuration - key = utils.iface_config_id(iface.node.id, iface.id) - config = self.get_config(key, model_name, default=False) + if iface.node_id is not None: + key = utils.iface_config_id(iface.node.id, iface.node_id) + config = self.get_configs(node_id=key, config_type=model_name) # attempt to retrieve node specific config, when iface config is not present if not config: - config = self.get_config(iface.node.id, model_name, default=False) + config = self.get_configs(node_id=iface.node.id, config_type=model_name) # attempt to get emane net specific config, when node config is not present if not config: # with EMANE 0.9.2+, we need an extra NEM XML from # model.buildnemxmlfiles(), so defaults are returned here - config = self.get_config(emane_net.id, model_name, default=False) + config = self.get_configs(node_id=emane_net.id, config_type=model_name) # return default config values, when a config is not present if not config: - config = emane_net.wireless_model.default_values() + config = emane_net.model.default_values() return config def config_reset(self, node_id: int = None) -> None: - if node_id is None: - self.node_configs.clear() - self.node_models.clear() - else: - self.node_configs.get(node_id, {}).clear() - self.node_models.pop(node_id, None) + super().config_reset(node_id) + self.set_configs(self.emane_config.default_values()) + + def emane_check(self) -> None: + """ + Check if emane is installed and load models. 
+ + :return: nothing + """ + # check for emane + path = utils.which("emane", required=False) + if not path: + logging.info("emane is not installed") + return + + # get version + emane_version = utils.cmd("emane --version") + logging.info("using emane: %s", emane_version) + + # load default emane models + self.load_models(EMANE_MODELS) + + # load custom models + custom_models_path = self.session.options.get_config("emane_models_dir") + if custom_models_path: + emane_models = utils.load_classes(custom_models_path, EmaneModel) + self.load_models(emane_models) + + def deleteeventservice(self) -> None: + if self.service: + for fd in self.service._readFd, self.service._writeFd: + if fd >= 0: + os.close(fd) + for f in self.service._socket, self.service._socketOTA: + if f: + f.close() + self.service = None + self.event_device = None + + def initeventservice(self, filename: str = None, shutdown: bool = False) -> None: + """ + Re-initialize the EMANE Event service. + The multicast group and/or port may be configured. + """ + self.deleteeventservice() + + if shutdown: + return + + # Get the control network to be used for events + group, port = self.get_config("eventservicegroup").split(":") + self.event_device = self.get_config("eventservicedevice") + eventnetidx = self.session.get_control_net_index(self.event_device) + if eventnetidx < 0: + logging.error( + "invalid emane event service device provided: %s", self.event_device + ) + return + + # make sure the event control network is in place + eventnet = self.session.add_remove_control_net( + net_index=eventnetidx, remove=False, conf_required=False + ) + if eventnet is not None: + # direct EMANE events towards control net bridge + self.event_device = eventnet.brname + self.eventchannel = (group, int(port), self.event_device) + + # disabled otachannel for event service + # only needed for e.g. antennaprofile events xmit by models + logging.info("using %s for event service traffic", self.event_device) + try: + self.service = EventService(eventchannel=self.eventchannel, otachannel=None) + except EventServiceException: + logging.exception("error instantiating emane EventService") + + def load_models(self, emane_models: List[Type[EmaneModel]]) -> None: + """ + Load EMANE models and make them available. + """ + for emane_model in emane_models: + logging.debug("loading emane model: %s", emane_model.__name__) + emane_prefix = self.session.options.get_config( + "emane_prefix", default=DEFAULT_EMANE_PREFIX + ) + emane_model.load(emane_prefix) + self.models[emane_model.name] = emane_model def add_node(self, emane_net: EmaneNet) -> None: """ @@ -260,7 +263,7 @@ class EmaneManager: ) self._emane_nets[emane_net.id] = emane_net - def getnodes(self) -> set[CoreNode]: + def getnodes(self) -> Set[CoreNode]: """ Return a set of CoreNodes that are linked to an EMANE network, e.g. containers having one or more radio interfaces. 
@@ -268,8 +271,7 @@ class EmaneManager: nodes = set() for emane_net in self._emane_nets.values(): for iface in emane_net.get_ifaces(): - if isinstance(iface.node, CoreNode): - nodes.add(iface.node) + nodes.add(iface.node) return nodes def setup(self) -> EmaneState: @@ -279,21 +281,53 @@ class EmaneManager: :return: SUCCESS, NOT_NEEDED, NOT_READY in order to delay session instantiation """ - logger.debug("emane setup") + logging.debug("emane setup") with self.session.nodes_lock: for node_id in self.session.nodes: node = self.session.nodes[node_id] if isinstance(node, EmaneNet): - logger.debug( + logging.debug( "adding emane node: id(%s) name(%s)", node.id, node.name ) self.add_node(node) if not self._emane_nets: - logger.debug("no emane nodes in session") + logging.debug("no emane nodes in session") return EmaneState.NOT_NEEDED + # check if bindings were installed if EventService is None: raise CoreError("EMANE python bindings are not installed") + + # control network bridge required for EMANE 0.9.2 + # - needs to exist when eventservice binds to it (initeventservice) + otadev = self.get_config("otamanagerdevice") + netidx = self.session.get_control_net_index(otadev) + logging.debug("emane ota manager device: index(%s) otadev(%s)", netidx, otadev) + if netidx < 0: + logging.error( + "EMANE cannot start, check core config. invalid OTA device provided: %s", + otadev, + ) + return EmaneState.NOT_READY + + self.session.add_remove_control_net( + net_index=netidx, remove=False, conf_required=False + ) + eventdev = self.get_config("eventservicedevice") + logging.debug("emane event service device: eventdev(%s)", eventdev) + if eventdev != otadev: + netidx = self.session.get_control_net_index(eventdev) + logging.debug("emane event service device index: %s", netidx) + if netidx < 0: + logging.error( + "emane cannot start due to invalid event service device: %s", + eventdev, + ) + return EmaneState.NOT_READY + + self.session.add_remove_control_net( + net_index=netidx, remove=False, conf_required=False + ) self.check_node_models() return EmaneState.SUCCESS @@ -309,103 +343,53 @@ class EmaneManager: status = self.setup() if status != EmaneState.SUCCESS: return status - self.startup_nodes() + self.starteventmonitor() + self.buildeventservicexml() + with self._emane_node_lock: + logging.info("emane building xmls...") + start_data = self.get_start_data() + for data in start_data: + self.start_node(data) if self.links_enabled(): self.link_monitor.start() return EmaneState.SUCCESS - def startup_nodes(self) -> None: - with self._emane_node_lock: - logger.info("emane building xmls...") - for emane_net, iface in self.get_ifaces(): - self.start_iface(emane_net, iface) - - def start_iface(self, emane_net: EmaneNet, iface: TunTap) -> None: - nem_id = self.next_nem_id(iface) - nem_port = self.get_nem_port(iface) - logger.info( - "starting emane for node(%s) iface(%s) nem(%s)", - iface.node.name, - iface.name, - nem_id, - ) - config = self.get_iface_config(emane_net, iface) - self.setup_control_channels(nem_id, iface, config) - emanexml.build_platform_xml(nem_id, nem_port, emane_net, iface, config) - self.start_daemon(iface) - self.install_iface(iface, config) - - def get_ifaces(self) -> list[tuple[EmaneNet, TunTap]]: - ifaces = [] - for emane_net in self._emane_nets.values(): - if not emane_net.wireless_model: - logger.error("emane net(%s) has no model", emane_net.name) + def get_start_data(self) -> List[StartData]: + node_map = {} + for node_id in sorted(self._emane_nets): + emane_net = self._emane_nets[node_id] + 
if not emane_net.model: + logging.error("emane net(%s) has no model", emane_net.name) continue for iface in emane_net.get_ifaces(): if not iface.node: - logger.error( + logging.error( "emane net(%s) connected interface(%s) missing node", emane_net.name, iface.name, ) continue - if isinstance(iface, TunTap): - ifaces.append((emane_net, iface)) - return sorted(ifaces, key=lambda x: (x[1].node.id, x[1].id)) + start_node = node_map.setdefault(iface.node, StartData(iface.node)) + start_node.ifaces.append(iface) + start_nodes = sorted(node_map.values(), key=lambda x: x.node.id) + for start_node in start_nodes: + start_node.ifaces = sorted(start_node.ifaces, key=lambda x: x.node_id) + return start_nodes - def setup_control_channels( - self, nem_id: int, iface: CoreInterface, config: dict[str, str] - ) -> None: - node = iface.node - # setup ota device - otagroup, _otaport = config["otamanagergroup"].split(":") - otadev = config["otamanagerdevice"] - ota_index = self.session.get_control_net_index(otadev) - self.session.add_remove_control_net(ota_index, conf_required=False) - if isinstance(node, CoreNode): - self.session.add_remove_control_iface(node, ota_index, conf_required=False) - # setup event device - eventgroup, eventport = config["eventservicegroup"].split(":") - eventdev = config["eventservicedevice"] - event_index = self.session.get_control_net_index(eventdev) - event_net = self.session.add_remove_control_net( - event_index, conf_required=False + def start_node(self, data: StartData) -> None: + control_net = self.session.add_remove_control_net( + 0, remove=False, conf_required=False ) - if isinstance(node, CoreNode): - self.session.add_remove_control_iface( - node, event_index, conf_required=False - ) - # initialize emane event services - service = self.services.get(event_net.brname) - if not service: - try: - service = EmaneEventService( - self, event_net.brname, eventgroup, int(eventport) - ) - if self.doeventmonitor(): - service.start() - self.services[event_net.brname] = service - self.nem_service[nem_id] = service - except EventServiceException: - raise CoreError( - "failed to start emane event services " - f"{event_net.brname} {eventgroup}:{eventport}" - ) - else: - self.nem_service[nem_id] = service - # setup multicast routes as needed - logger.info( - "node(%s) interface(%s) ota(%s:%s) event(%s:%s)", - node.name, - iface.name, - otagroup, - otadev, - eventgroup, - eventdev, - ) - node.node_net_client.create_route(otagroup, otadev) - if eventgroup != otagroup: - node.node_net_client.create_route(eventgroup, eventdev) + emanexml.build_platform_xml(self, control_net, data) + self.start_daemon(data.node) + for iface in data.ifaces: + self.install_iface(iface) + + def set_nem(self, nem_id: int, iface: CoreInterface) -> None: + if nem_id in self.nems_to_ifaces: + raise CoreError(f"adding duplicate nem: {nem_id}") + self.nems_to_ifaces[nem_id] = iface + self.ifaces_to_nems[iface] = nem_id def get_iface(self, nem_id: int) -> Optional[CoreInterface]: return self.nems_to_ifaces.get(nem_id) @@ -413,94 +397,32 @@ class EmaneManager: def get_nem_id(self, iface: CoreInterface) -> Optional[int]: return self.ifaces_to_nems.get(iface) - def get_nem_port(self, iface: CoreInterface) -> int: - nem_id = self.get_nem_id(iface) - return int(f"47{nem_id:03}") - - def get_nem_position( - self, iface: CoreInterface - ) -> Optional[tuple[int, float, float, int]]: - """ - Retrieves nem position for a given interface. 
- - :param iface: interface to get nem emane position for - :return: nem position tuple, None otherwise - """ - nem_id = self.get_nem_id(iface) - if nem_id is None: - logger.info("nem for %s is unknown", iface.localname) - return - node = iface.node - x, y, z = node.getposition() - lat, lon, alt = self.session.location.getgeo(x, y, z) - if node.position.alt is not None: - alt = node.position.alt - node.position.set_geo(lon, lat, alt) - # altitude must be an integer or warning is printed - alt = int(round(alt)) - return nem_id, lon, lat, alt - - def set_nem_position(self, iface: CoreInterface) -> None: - """ - Publish a NEM location change event using the EMANE event service. - - :param iface: interface to set nem position for - """ - position = self.get_nem_position(iface) - if position: - nemid, lon, lat, alt = position - event = LocationEvent() - event.append(nemid, latitude=lat, longitude=lon, altitude=alt) - self.publish_event(nemid, event, send_all=True) - - def set_nem_positions(self, moved_ifaces: list[CoreInterface]) -> None: - """ - Several NEMs have moved, from e.g. a WaypointMobilityModel - calculation. Generate an EMANE Location Event having several - entries for each interface that has moved. - """ - if not moved_ifaces: - return - services = {} - for iface in moved_ifaces: - position = self.get_nem_position(iface) - if not position: - continue - nem_id, lon, lat, alt = position - service = self.nem_service.get(nem_id) - if not service: - continue - event = services.setdefault(service, LocationEvent()) - event.append(nem_id, latitude=lat, longitude=lon, altitude=alt) - for service, event in services.items(): - service.events.publish(0, event) - def write_nem(self, iface: CoreInterface, nem_id: int) -> None: - path = self.session.directory / "emane_nems" + path = os.path.join(self.session.session_dir, "emane_nems") try: - with path.open("a") as f: + with open(path, "a") as f: f.write(f"{iface.node.name} {iface.name} {nem_id}\n") - except OSError: - logger.exception("error writing to emane nem file") + except IOError: + logging.exception("error writing to emane nem file") def links_enabled(self) -> bool: - return self.session.options.get_int("link_enabled") == 1 + return self.get_config("link_enabled") == "1" def poststartup(self) -> None: """ Retransmit location events now that all NEMs are active. 
""" - events_enabled = self.genlocationevents() + if not self.genlocationevents(): + return with self._emane_node_lock: for node_id in sorted(self._emane_nets): emane_net = self._emane_nets[node_id] - logger.debug( + logging.debug( "post startup for emane node: %s - %s", emane_net.id, emane_net.name ) + emane_net.model.post_startup() for iface in emane_net.get_ifaces(): - emane_net.wireless_model.post_startup(iface) - if events_enabled: - iface.setposition() + iface.setposition() def reset(self) -> None: """ @@ -511,8 +433,6 @@ class EmaneManager: self._emane_nets.clear() self.nems_to_ifaces.clear() self.ifaces_to_nems.clear() - self.nems_to_ifaces.clear() - self.services.clear() def shutdown(self) -> None: """ @@ -521,26 +441,25 @@ class EmaneManager: with self._emane_node_lock: if not self._emane_nets: return - logger.info("stopping EMANE daemons") + logging.info("stopping EMANE daemons") if self.links_enabled(): self.link_monitor.stop() - # shutdown interfaces - for _, iface in self.get_ifaces(): - node = iface.node + # shutdown interfaces and stop daemons + kill_emaned = "killall -q emane" + start_data = self.get_start_data() + for data in start_data: + node = data.node if not node.up: continue - kill_cmd = f'pkill -f "emane.+{iface.name}"' + for iface in data.ifaces: + if isinstance(node, CoreNode): + iface.shutdown() + iface.poshook = None if isinstance(node, CoreNode): - iface.shutdown() - node.cmd(kill_cmd, wait=False) + node.cmd(kill_emaned, wait=False) else: - node.host_cmd(kill_cmd, wait=False) - iface.poshook = None - # stop emane event services - while self.services: - _, service = self.services.popitem() - service.stop() - self.nem_service.clear() + node.host_cmd(kill_emaned, wait=False) + self.stopeventmonitor() def check_node_models(self) -> None: """ @@ -548,25 +467,25 @@ class EmaneManager: """ for node_id in self._emane_nets: emane_net = self._emane_nets[node_id] - logger.debug("checking emane model for node: %s", node_id) + logging.debug("checking emane model for node: %s", node_id) + # skip nodes that already have a model set - if emane_net.wireless_model: - logger.debug( - "node(%s) already has model(%s)", - emane_net.id, - emane_net.wireless_model.name, + if emane_net.model: + logging.debug( + "node(%s) already has model(%s)", emane_net.id, emane_net.model.name ) continue + # set model configured for node, due to legacy messaging configuration # before nodes exist model_name = self.node_models.get(node_id) if not model_name: - logger.error("emane node(%s) has no node model", node_id) + logging.error("emane node(%s) has no node model", node_id) raise ValueError("emane node has no model set") - config = self.get_config(node_id, model_name) - logger.debug("setting emane model(%s) config(%s)", model_name, config) - model_class = self.get_model(model_name) + config = self.get_model_config(node_id=node_id, model_name=model_name) + logging.debug("setting emane model(%s) config(%s)", model_name, config) + model_class = self.models[model_name] emane_net.setmodel(model_class, config) def get_nem_link( @@ -574,12 +493,12 @@ class EmaneManager: ) -> Optional[LinkData]: iface1 = self.get_iface(nem1) if not iface1: - logger.error("invalid nem: %s", nem1) + logging.error("invalid nem: %s", nem1) return None node1 = iface1.node iface2 = self.get_iface(nem2) if not iface2: - logger.error("invalid nem: %s", nem2) + logging.error("invalid nem: %s", nem2) return None node2 = iface2.node if iface1.net != iface2.net: @@ -595,56 +514,191 @@ class EmaneManager: color=color, ) - def 
start_daemon(self, iface: CoreInterface) -> None: + def buildeventservicexml(self) -> None: """ - Start emane daemon for a given nem/interface. + Build the libemaneeventservice.xml file if event service options + were changed in the global config. + """ + need_xml = False + default_values = self.emane_config.default_values() + for name in ["eventservicegroup", "eventservicedevice"]: + a = default_values[name] + b = self.get_config(name) + if a != b: + need_xml = True - :param iface: interface to start emane daemon for - :return: nothing + if not need_xml: + # reset to using default config + self.initeventservice() + return + + try: + group, port = self.get_config("eventservicegroup").split(":") + except ValueError: + logging.exception("invalid eventservicegroup in EMANE config") + return + + dev = self.get_config("eventservicedevice") + emanexml.create_event_service_xml(group, port, dev, self.session.session_dir) + self.session.distributed.execute( + lambda x: emanexml.create_event_service_xml( + group, port, dev, self.session.session_dir, x + ) + ) + + def start_daemon(self, node: CoreNodeBase) -> None: """ - node = iface.node - loglevel = str(DEFAULT_LOG_LEVEL) - cfgloglevel = self.session.options.get_int("emane_log_level", 2) - realtime = self.session.options.get_bool("emane_realtime", True) + Start one EMANE daemon per node having a radio. + Add a control network even if the user has not configured one. + """ + logging.info("starting emane daemons...") + loglevel = str(EmaneManager.DEFAULT_LOG_LEVEL) + cfgloglevel = self.session.options.get_config_int("emane_log_level") + realtime = self.session.options.get_config_bool("emane_realtime", default=True) if cfgloglevel: - logger.info("setting user-defined emane log level: %d", cfgloglevel) + logging.info("setting user-defined emane log level: %d", cfgloglevel) loglevel = str(cfgloglevel) emanecmd = f"emane -d -l {loglevel}" if realtime: emanecmd += " -r" if isinstance(node, CoreNode): + otagroup, _otaport = self.get_config("otamanagergroup").split(":") + otadev = self.get_config("otamanagerdevice") + otanetidx = self.session.get_control_net_index(otadev) + eventgroup, _eventport = self.get_config("eventservicegroup").split(":") + eventdev = self.get_config("eventservicedevice") + eventservicenetidx = self.session.get_control_net_index(eventdev) + + # control network not yet started here + self.session.add_remove_control_iface( + node, 0, remove=False, conf_required=False + ) + if otanetidx > 0: + logging.info("adding ota device ctrl%d", otanetidx) + self.session.add_remove_control_iface( + node, otanetidx, remove=False, conf_required=False + ) + if eventservicenetidx >= 0: + logging.info("adding event service device ctrl%d", eventservicenetidx) + self.session.add_remove_control_iface( + node, eventservicenetidx, remove=False, conf_required=False + ) + # multicast route is needed for OTA data + logging.info("OTA GROUP(%s) OTA DEV(%s)", otagroup, otadev) + node.node_net_client.create_route(otagroup, otadev) + # multicast route is also needed for event data if on control network + if eventservicenetidx >= 0 and eventgroup != otagroup: + node.node_net_client.create_route(eventgroup, eventdev) # start emane - log_file = node.directory / f"{iface.name}-emane.log" - platform_xml = node.directory / emanexml.platform_file_name(iface) + log_file = os.path.join(node.nodedir, f"{node.name}-emane.log") + platform_xml = os.path.join(node.nodedir, f"{node.name}-platform.xml") args = f"{emanecmd} -f {log_file} {platform_xml}" node.cmd(args) + 
logging.info("node(%s) emane daemon running: %s", node.name, args) else: - log_file = self.session.directory / f"{iface.name}-emane.log" - platform_xml = self.session.directory / emanexml.platform_file_name(iface) - args = f"{emanecmd} -f {log_file} {platform_xml}" - node.host_cmd(args, cwd=self.session.directory) + path = self.session.session_dir + log_file = os.path.join(path, f"{node.name}-emane.log") + platform_xml = os.path.join(path, f"{node.name}-platform.xml") + emanecmd += f" -f {log_file} {platform_xml}" + node.host_cmd(emanecmd, cwd=path) + logging.info("node(%s) host emane daemon running: %s", node.name, emanecmd) - def install_iface(self, iface: TunTap, config: dict[str, str]) -> None: + def install_iface(self, iface: CoreInterface) -> None: + emane_net = iface.net + if not isinstance(emane_net, EmaneNet): + raise CoreError( + f"emane interface not connected to emane net: {emane_net.name}" + ) + config = self.get_iface_config(emane_net, iface) external = config.get("external", "0") - if external == "0": + if isinstance(iface, TunTap) and external == "0": iface.set_ips() # at this point we register location handlers for generating # EMANE location events if self.genlocationevents(): - iface.poshook = self.set_nem_position + iface.poshook = emane_net.setnemposition iface.setposition() def doeventmonitor(self) -> bool: """ Returns boolean whether or not EMANE events will be monitored. """ - return self.session.options.get_bool("emane_event_monitor", False) + # this support must be explicitly turned on; by default, CORE will + # generate the EMANE events when nodes are moved + return self.session.options.get_config_bool("emane_event_monitor") def genlocationevents(self) -> bool: """ Returns boolean whether or not EMANE events will be generated. """ - return self.session.options.get_bool("emane_event_generate", True) + # By default, CORE generates EMANE location events when nodes + # are moved; this can be explicitly disabled in core.conf + tmp = self.session.options.get_config_bool("emane_event_generate") + if tmp is None: + tmp = not self.doeventmonitor() + return tmp + + def starteventmonitor(self) -> None: + """ + Start monitoring EMANE location events if configured to do so. + """ + logging.info("emane start event monitor") + if not self.doeventmonitor(): + return + if self.service is None: + logging.error( + "Warning: EMANE events will not be generated " + "because the emaneeventservice\n binding was " + "unable to load " + "(install the python-emaneeventservice bindings)" + ) + return + self.doeventloop = True + self.eventmonthread = threading.Thread( + target=self.eventmonitorloop, daemon=True + ) + self.eventmonthread.start() + + def stopeventmonitor(self) -> None: + """ + Stop monitoring EMANE location events. + """ + self.doeventloop = False + if self.service is not None: + self.service.breakloop() + # reset the service, otherwise nextEvent won"t work + self.initeventservice(shutdown=True) + + if self.eventmonthread is not None: + self.eventmonthread.join() + self.eventmonthread = None + + def eventmonitorloop(self) -> None: + """ + Thread target that monitors EMANE location events. + """ + if self.service is None: + return + logging.info( + "subscribing to EMANE location events. 
(%s)", + threading.currentThread().getName(), + ) + while self.doeventloop is True: + _uuid, _seq, events = self.service.nextEvent() + + # this occurs with 0.9.1 event service + if not self.doeventloop: + break + + for event in events: + nem, eid, data = event + if eid == LocationEvent.IDENTIFIER: + self.handlelocationevent(nem, eid, data) + + logging.info( + "unsubscribing from EMANE location events. (%s)", + threading.currentThread().getName(), + ) def handlelocationevent(self, rxnemid: int, eid: int, data: str) -> None: """ @@ -659,13 +713,14 @@ class EmaneManager: or "longitude" not in attrs or "altitude" not in attrs ): - logger.warning("dropped invalid location event") + logging.warning("dropped invalid location event") continue + # yaw,pitch,roll,azimuth,elevation,velocity are unhandled lat = attrs["latitude"] lon = attrs["longitude"] alt = attrs["altitude"] - logger.debug("emane location event: %s,%s,%s", lat, lon, alt) + logging.debug("emane location event: %s,%s,%s", lat, lon, alt) self.handlelocationeventtoxyz(txnemid, lat, lon, alt) def handlelocationeventtoxyz( @@ -679,7 +734,7 @@ class EmaneManager: # convert nemid to node number iface = self.get_iface(nemid) if iface is None: - logger.info("location event for unknown NEM %s", nemid) + logging.info("location event for unknown NEM %s", nemid) return False n = iface.node.id @@ -688,7 +743,7 @@ class EmaneManager: x = int(x) y = int(y) z = int(z) - logger.debug( + logging.debug( "location event NEM %s (%s, %s, %s) -> (%s, %s, %s)", nemid, lat, @@ -702,7 +757,7 @@ class EmaneManager: ybit_check = y.bit_length() > 16 or y < 0 zbit_check = z.bit_length() > 16 or z < 0 if any([xbit_check, ybit_check, zbit_check]): - logger.error( + logging.error( "Unable to build node location message, received lat/long/alt " "exceeds coordinate space: NEM %s (%d, %d, %d)", nemid, @@ -716,7 +771,7 @@ class EmaneManager: try: node = self.session.get_node(n, NodeBase) except CoreError: - logger.exception( + logging.exception( "location event NEM %s has no corresponding node %s", nemid, n ) return False @@ -727,6 +782,9 @@ class EmaneManager: self.session.broadcast_node(node) return True + def is_emane_net(self, net: Optional[CoreNetworkBase]) -> bool: + return isinstance(net, EmaneNet) + def emanerunning(self, node: CoreNode) -> bool: """ Return True if an EMANE process associated with the given node is running, @@ -752,19 +810,86 @@ class EmaneManager: event = PathlossEvent() event.append(nem1, forward=rx1) event.append(nem2, forward=rx2) - self.publish_event(nem1, event) - self.publish_event(nem2, event) + self.service.publish(nem1, event) + self.service.publish(nem2, event) - def publish_event( - self, - nem_id: int, - event: Union[PathlossEvent, CommEffectEvent, LocationEvent], - send_all: bool = False, - ) -> None: - service = self.nem_service.get(nem_id) - if not service: - logger.error("no service to publish event nem(%s)", nem_id) - return - if send_all: - nem_id = 0 - service.events.publish(nem_id, event) + +class EmaneGlobalModel: + """ + Global EMANE configuration options. 
+ """ + + name: str = "emane" + bitmap: Optional[str] = None + + def __init__(self, session: "Session") -> None: + self.session: "Session" = session + self.core_config: List[Configuration] = [ + Configuration( + _id="platform_id_start", + _type=ConfigDataTypes.INT32, + default="1", + label="Starting Platform ID", + ), + Configuration( + _id="nem_id_start", + _type=ConfigDataTypes.INT32, + default="1", + label="Starting NEM ID", + ), + Configuration( + _id="link_enabled", + _type=ConfigDataTypes.BOOL, + default="1", + label="Enable Links?", + ), + Configuration( + _id="loss_threshold", + _type=ConfigDataTypes.INT32, + default="30", + label="Link Loss Threshold (%)", + ), + Configuration( + _id="link_interval", + _type=ConfigDataTypes.INT32, + default="1", + label="Link Check Interval (sec)", + ), + Configuration( + _id="link_timeout", + _type=ConfigDataTypes.INT32, + default="4", + label="Link Timeout (sec)", + ), + ] + self.emulator_config = None + self.parse_config() + + def parse_config(self) -> None: + emane_prefix = self.session.options.get_config( + "emane_prefix", default=DEFAULT_EMANE_PREFIX + ) + emulator_xml = os.path.join(emane_prefix, "share/emane/manifest/nemmanager.xml") + emulator_defaults = { + "eventservicedevice": DEFAULT_DEV, + "eventservicegroup": "224.1.2.8:45703", + "otamanagerdevice": DEFAULT_DEV, + "otamanagergroup": "224.1.2.8:45702", + } + self.emulator_config = emanemanifest.parse(emulator_xml, emulator_defaults) + + def configurations(self) -> List[Configuration]: + return self.emulator_config + self.core_config + + def config_groups(self) -> List[ConfigGroup]: + emulator_len = len(self.emulator_config) + config_len = len(self.configurations()) + return [ + ConfigGroup("Platform Attributes", 1, emulator_len), + ConfigGroup("CORE Configuration", emulator_len + 1, config_len), + ] + + def default_values(self) -> Dict[str, str]: + return OrderedDict( + [(config.id, config.default) for config in self.configurations()] + ) diff --git a/daemon/core/emane/emanemanifest.py b/daemon/core/emane/emanemanifest.py index ea2b05fd..41dc7beb 100644 --- a/daemon/core/emane/emanemanifest.py +++ b/daemon/core/emane/emanemanifest.py @@ -1,11 +1,9 @@ import logging -from pathlib import Path +from typing import Dict, List from core.config import Configuration from core.emulator.enumerations import ConfigDataTypes -logger = logging.getLogger(__name__) - manifest = None try: from emane.shell import manifest @@ -14,7 +12,7 @@ except ImportError: from emanesh import manifest except ImportError: manifest = None - logger.debug("compatible emane python bindings not installed") + logging.debug("compatible emane python bindings not installed") def _type_value(config_type: str) -> ConfigDataTypes: @@ -32,7 +30,7 @@ def _type_value(config_type: str) -> ConfigDataTypes: return ConfigDataTypes[config_type] -def _get_possible(config_type: str, config_regex: str) -> list[str]: +def _get_possible(config_type: str, config_regex: str) -> List[str]: """ Retrieve possible config value options based on emane regexes. @@ -50,7 +48,7 @@ def _get_possible(config_type: str, config_regex: str) -> list[str]: return [] -def _get_default(config_type_name: str, config_value: list[str]) -> str: +def _get_default(config_type_name: str, config_value: List[str]) -> str: """ Convert default configuration values to one used by core. 
@@ -73,10 +71,9 @@ def _get_default(config_type_name: str, config_value: list[str]) -> str: return config_default -def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]: +def parse(manifest_path: str, defaults: Dict[str, str]) -> List[Configuration]: """ - Parses a valid emane manifest file and converts the provided configuration values - into ones used by core. + Parses a valid emane manifest file and converts the provided configuration values into ones used by core. :param manifest_path: absolute manifest file path :param defaults: used to override default values for configurations @@ -88,7 +85,7 @@ def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]: return [] # load configuration file - manifest_file = manifest.Manifest(str(manifest_path)) + manifest_file = manifest.Manifest(manifest_path) manifest_configurations = manifest_file.getAllConfiguration() configurations = [] @@ -119,8 +116,8 @@ def parse(manifest_path: Path, defaults: dict[str, str]) -> list[Configuration]: config_descriptions = f"{config_descriptions} file" configuration = Configuration( - id=config_name, - type=config_type_value, + _id=config_name, + _type=config_type_value, default=config_default, options=possible, label=config_descriptions, diff --git a/daemon/core/emane/emanemodel.py b/daemon/core/emane/emanemodel.py index 4e31d632..755f07aa 100644 --- a/daemon/core/emane/emanemodel.py +++ b/daemon/core/emane/emanemodel.py @@ -2,21 +2,19 @@ Defines Emane Models used within CORE. """ import logging -from pathlib import Path -from typing import Optional +import os +from typing import Dict, List, Optional, Set -from core.config import ConfigBool, ConfigGroup, ConfigString, Configuration +from core.config import ConfigGroup, Configuration from core.emane import emanemanifest +from core.emane.nodes import EmaneNet from core.emulator.data import LinkOptions +from core.emulator.enumerations import ConfigDataTypes from core.errors import CoreError from core.location.mobility import WirelessModel from core.nodes.interface import CoreInterface from core.xml import emanexml -logger = logging.getLogger(__name__) -DEFAULT_DEV: str = "ctrl0" -MANIFEST_PATH: str = "share/emane/manifest" - class EmaneModel(WirelessModel): """ @@ -25,104 +23,79 @@ class EmaneModel(WirelessModel): configurable parameters. Helper functions also live here. 
""" - # default platform configuration settings - platform_controlport: str = "controlportendpoint" - platform_xml: str = "nemmanager.xml" - platform_defaults: dict[str, str] = { - "eventservicedevice": DEFAULT_DEV, - "eventservicegroup": "224.1.2.8:45703", - "otamanagerdevice": DEFAULT_DEV, - "otamanagergroup": "224.1.2.8:45702", - } - platform_config: list[Configuration] = [] - # default mac configuration settings mac_library: Optional[str] = None mac_xml: Optional[str] = None - mac_defaults: dict[str, str] = {} - mac_config: list[Configuration] = [] + mac_defaults: Dict[str, str] = {} + mac_config: List[Configuration] = [] # default phy configuration settings, using the universal model phy_library: Optional[str] = None phy_xml: str = "emanephy.xml" - phy_defaults: dict[str, str] = { + phy_defaults: Dict[str, str] = { "subid": "1", "propagationmodel": "2ray", "noisemode": "none", } - phy_config: list[Configuration] = [] + phy_config: List[Configuration] = [] # support for external configurations - external_config: list[Configuration] = [ - ConfigBool(id="external", default="0"), - ConfigString(id="platformendpoint", default="127.0.0.1:40001"), - ConfigString(id="transportendpoint", default="127.0.0.1:50002"), + external_config: List[Configuration] = [ + Configuration("external", ConfigDataTypes.BOOL, default="0"), + Configuration( + "platformendpoint", ConfigDataTypes.STRING, default="127.0.0.1:40001" + ), + Configuration( + "transportendpoint", ConfigDataTypes.STRING, default="127.0.0.1:50002" + ), ] - config_ignore: set[str] = set() + config_ignore: Set[str] = set() @classmethod - def load(cls, emane_prefix: Path) -> None: + def load(cls, emane_prefix: str) -> None: """ - Called after being loaded within the EmaneManager. Provides configured - emane_prefix for parsing xml files. + Called after being loaded within the EmaneManager. Provides configured emane_prefix for + parsing xml files. :param emane_prefix: configured emane prefix path :return: nothing """ - cls._load_platform_config(emane_prefix) + manifest_path = "share/emane/manifest" # load mac configuration - mac_xml_path = emane_prefix / MANIFEST_PATH / cls.mac_xml + mac_xml_path = os.path.join(emane_prefix, manifest_path, cls.mac_xml) cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults) + # load phy configuration - phy_xml_path = emane_prefix / MANIFEST_PATH / cls.phy_xml + phy_xml_path = os.path.join(emane_prefix, manifest_path, cls.phy_xml) cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults) @classmethod - def _load_platform_config(cls, emane_prefix: Path) -> None: - platform_xml_path = emane_prefix / MANIFEST_PATH / cls.platform_xml - cls.platform_config = emanemanifest.parse( - platform_xml_path, cls.platform_defaults - ) - # remove controlport configuration, since core will set this directly - controlport_index = None - for index, configuration in enumerate(cls.platform_config): - if configuration.id == cls.platform_controlport: - controlport_index = index - break - if controlport_index is not None: - cls.platform_config.pop(controlport_index) - - @classmethod - def configurations(cls) -> list[Configuration]: + def configurations(cls) -> List[Configuration]: """ Returns the combination all all configurations (mac, phy, and external). 
:return: all configurations """ - return ( - cls.platform_config + cls.mac_config + cls.phy_config + cls.external_config - ) + return cls.mac_config + cls.phy_config + cls.external_config @classmethod - def config_groups(cls) -> list[ConfigGroup]: + def config_groups(cls) -> List[ConfigGroup]: """ Returns the defined configuration groups. :return: list of configuration groups. """ - platform_len = len(cls.platform_config) - mac_len = len(cls.mac_config) + platform_len + mac_len = len(cls.mac_config) phy_len = len(cls.phy_config) + mac_len config_len = len(cls.configurations()) return [ - ConfigGroup("Platform Parameters", 1, platform_len), - ConfigGroup("MAC Parameters", platform_len + 1, mac_len), + ConfigGroup("MAC Parameters", 1, mac_len), ConfigGroup("PHY Parameters", mac_len + 1, phy_len), ConfigGroup("External Parameters", phy_len + 1, config_len), ] - def build_xml_files(self, config: dict[str, str], iface: CoreInterface) -> None: + def build_xml_files(self, config: Dict[str, str], iface: CoreInterface) -> None: """ Builds xml files for this emane model. Creates a nem.xml file that points to both mac.xml and phy.xml definitions. @@ -137,16 +110,15 @@ class EmaneModel(WirelessModel): emanexml.create_phy_xml(self, iface, config) emanexml.create_transport_xml(iface, config) - def post_startup(self, iface: CoreInterface) -> None: + def post_startup(self) -> None: """ Logic to execute after the emane manager is finished with startup. - :param iface: interface for post startup :return: nothing """ - logger.debug("emane model(%s) has no post setup tasks", self.name) + logging.debug("emane model(%s) has no post setup tasks", self.name) - def update(self, moved_ifaces: list[CoreInterface]) -> None: + def update(self, moved_ifaces: List[CoreInterface]) -> None: """ Invoked from MobilityModel when nodes are moved; this causes emane location events to be generated for the nodes in the moved @@ -156,9 +128,10 @@ class EmaneModel(WirelessModel): :return: nothing """ try: - self.session.emane.set_nem_positions(moved_ifaces) + emane_net = self.session.get_node(self.id, EmaneNet) + emane_net.setnempositions(moved_ifaces) except CoreError: - logger.exception("error during update") + logging.exception("error during update") def linkconfig( self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None @@ -171,4 +144,4 @@ class EmaneModel(WirelessModel): :param iface2: interface two :return: nothing """ - logger.warning("emane model(%s) does not support link config", self.name) + logging.warning("emane model(%s) does not support link config", self.name) diff --git a/daemon/core/emane/models/ieee80211abg.py b/daemon/core/emane/ieee80211abg.py similarity index 65% rename from daemon/core/emane/models/ieee80211abg.py rename to daemon/core/emane/ieee80211abg.py index f6b32264..0d58ec9e 100644 --- a/daemon/core/emane/models/ieee80211abg.py +++ b/daemon/core/emane/ieee80211abg.py @@ -1,7 +1,7 @@ """ ieee80211abg.py: EMANE IEEE 802.11abg model for CORE """ -from pathlib import Path +import os from core.emane import emanemodel @@ -15,8 +15,8 @@ class EmaneIeee80211abgModel(emanemodel.EmaneModel): mac_xml: str = "ieee80211abgmaclayer.xml" @classmethod - def load(cls, emane_prefix: Path) -> None: - cls.mac_defaults["pcrcurveuri"] = str( - emane_prefix / "share/emane/xml/models/mac/ieee80211abg/ieee80211pcr.xml" + def load(cls, emane_prefix: str) -> None: + cls.mac_defaults["pcrcurveuri"] = os.path.join( + emane_prefix, "share/emane/xml/models/mac/ieee80211abg/ieee80211pcr.xml" ) 
super().load(emane_prefix) diff --git a/daemon/core/emane/linkmonitor.py b/daemon/core/emane/linkmonitor.py index 1997e9f8..56473f62 100644 --- a/daemon/core/emane/linkmonitor.py +++ b/daemon/core/emane/linkmonitor.py @@ -2,17 +2,14 @@ import logging import sched import threading import time -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple from lxml import etree -from core.emane.nodes import EmaneNet from core.emulator.data import LinkData from core.emulator.enumerations import LinkTypes, MessageFlags from core.nodes.network import CtrlNet -logger = logging.getLogger(__name__) - try: from emane import shell except ImportError: @@ -20,11 +17,12 @@ except ImportError: from emanesh import shell except ImportError: shell = None - logger.debug("compatible emane python bindings not installed") + logging.debug("compatible emane python bindings not installed") if TYPE_CHECKING: from core.emane.emanemanager import EmaneManager +DEFAULT_PORT: int = 47_000 MAC_COMPONENT_INDEX: int = 1 EMANE_RFPIPE: str = "rfpipemaclayer" EMANE_80211: str = "ieee80211abgmaclayer" @@ -34,10 +32,10 @@ NEM_SELF: int = 65535 class LossTable: - def __init__(self, losses: dict[float, float]) -> None: - self.losses: dict[float, float] = losses - self.sinrs: list[float] = sorted(self.losses.keys()) - self.loss_lookup: dict[int, float] = {} + def __init__(self, losses: Dict[float, float]) -> None: + self.losses: Dict[float, float] = losses + self.sinrs: List[float] = sorted(self.losses.keys()) + self.loss_lookup: Dict[int, float] = {} for index, value in enumerate(self.sinrs): self.loss_lookup[index] = self.losses[value] self.mac_id: Optional[str] = None @@ -79,12 +77,12 @@ class EmaneLink: class EmaneClient: - def __init__(self, address: str, port: int) -> None: + def __init__(self, address: str) -> None: self.address: str = address self.client: shell.ControlPortClient = shell.ControlPortClient( - self.address, port + self.address, DEFAULT_PORT ) - self.nems: dict[int, LossTable] = {} + self.nems: Dict[int, LossTable] = {} self.setup() def setup(self) -> None: @@ -93,7 +91,7 @@ class EmaneClient: # get mac config mac_id, _, emane_model = components[MAC_COMPONENT_INDEX] mac_config = self.client.getConfiguration(mac_id) - logger.debug( + logging.debug( "address(%s) nem(%s) emane(%s)", self.address, nem_id, emane_model ) @@ -103,14 +101,14 @@ class EmaneClient: elif emane_model == EMANE_RFPIPE: loss_table = self.handle_rfpipe(mac_config) else: - logger.warning("unknown emane link model: %s", emane_model) + logging.warning("unknown emane link model: %s", emane_model) continue - logger.info("monitoring links nem(%s) model(%s)", nem_id, emane_model) + logging.info("monitoring links nem(%s) model(%s)", nem_id, emane_model) loss_table.mac_id = mac_id self.nems[nem_id] = loss_table def check_links( - self, links: dict[tuple[int, int], EmaneLink], loss_threshold: int + self, links: Dict[Tuple[int, int], EmaneLink], loss_threshold: int ) -> None: for from_nem, loss_table in self.nems.items(): tables = self.client.getStatisticTable(loss_table.mac_id, (SINR_TABLE,)) @@ -138,14 +136,14 @@ class EmaneClient: link = EmaneLink(from_nem, to_nem, sinr) links[link_key] = link - def handle_tdma(self, config: dict[str, tuple]): + def handle_tdma(self, config: Dict[str, Tuple]): pcr = config["pcrcurveuri"][0][0] - logger.debug("tdma pcr: %s", pcr) + logging.debug("tdma pcr: %s", pcr) - def handle_80211(self, config: dict[str, tuple]) -> LossTable: + def handle_80211(self, config: 
Dict[str, Tuple]) -> LossTable: unicastrate = config["unicastrate"][0][0] pcr = config["pcrcurveuri"][0][0] - logger.debug("80211 pcr: %s", pcr) + logging.debug("80211 pcr: %s", pcr) tree = etree.parse(pcr) root = tree.getroot() table = root.find("table") @@ -159,9 +157,9 @@ class EmaneClient: losses[sinr] = por return LossTable(losses) - def handle_rfpipe(self, config: dict[str, tuple]) -> LossTable: + def handle_rfpipe(self, config: Dict[str, Tuple]) -> LossTable: pcr = config["pcrcurveuri"][0][0] - logger.debug("rfpipe pcr: %s", pcr) + logging.debug("rfpipe pcr: %s", pcr) tree = etree.parse(pcr) root = tree.getroot() table = root.find("table") @@ -179,9 +177,9 @@ class EmaneClient: class EmaneLinkMonitor: def __init__(self, emane_manager: "EmaneManager") -> None: self.emane_manager: "EmaneManager" = emane_manager - self.clients: list[EmaneClient] = [] - self.links: dict[tuple[int, int], EmaneLink] = {} - self.complete_links: set[tuple[int, int]] = set() + self.clients: List[EmaneClient] = [] + self.links: Dict[Tuple[int, int], EmaneLink] = {} + self.complete_links: Set[Tuple[int, int]] = set() self.loss_threshold: Optional[int] = None self.link_interval: Optional[int] = None self.link_timeout: Optional[int] = None @@ -189,13 +187,12 @@ class EmaneLinkMonitor: self.running: bool = False def start(self) -> None: - options = self.emane_manager.session.options - self.loss_threshold = options.get_int("loss_threshold") - self.link_interval = options.get_int("link_interval") - self.link_timeout = options.get_int("link_timeout") + self.loss_threshold = int(self.emane_manager.get_config("loss_threshold")) + self.link_interval = int(self.emane_manager.get_config("link_interval")) + self.link_timeout = int(self.emane_manager.get_config("link_timeout")) self.initialize() if not self.clients: - logger.info("no valid emane models to monitor links") + logging.info("no valid emane models to monitor links") return self.scheduler = sched.scheduler() self.scheduler.enter(0, 0, self.check_links) @@ -205,28 +202,22 @@ class EmaneLinkMonitor: def initialize(self) -> None: addresses = self.get_addresses() - for address, port in addresses: - client = EmaneClient(address, port) + for address in addresses: + client = EmaneClient(address) if client.nems: self.clients.append(client) - def get_addresses(self) -> list[tuple[str, int]]: + def get_addresses(self) -> List[str]: addresses = [] nodes = self.emane_manager.getnodes() for node in nodes: - control = None - ports = [] for iface in node.get_ifaces(): if isinstance(iface.net, CtrlNet): ip4 = iface.get_ip4() if ip4: - control = str(ip4.ip) - if isinstance(iface.net, EmaneNet): - port = self.emane_manager.get_nem_port(iface) - ports.append(port) - if control: - for port in ports: - addresses.append((control, port)) + address = str(ip4.ip) + addresses.append(address) + break return addresses def check_links(self) -> None: @@ -237,7 +228,7 @@ class EmaneLinkMonitor: client.check_links(self.links, self.loss_threshold) except shell.ControlPortException: if self.running: - logger.exception("link monitor error") + logging.exception("link monitor error") # find new links current_links = set(self.links.keys()) @@ -273,25 +264,25 @@ class EmaneLinkMonitor: if self.running: self.scheduler.enter(self.link_interval, 0, self.check_links) - def get_complete_id(self, link_id: tuple[int, int]) -> tuple[int, int]: + def get_complete_id(self, link_id: Tuple[int, int]) -> Tuple[int, int]: value1, value2 = link_id if value1 < value2: return value1, value2 else: return value2, value1 
- def is_complete_link(self, link_id: tuple[int, int]) -> bool: + def is_complete_link(self, link_id: Tuple[int, int]) -> bool: reverse_id = link_id[1], link_id[0] return link_id in self.links and reverse_id in self.links - def get_link_label(self, link_id: tuple[int, int]) -> str: + def get_link_label(self, link_id: Tuple[int, int]) -> str: source_id = tuple(sorted(link_id)) source_link = self.links[source_id] dest_id = link_id[::-1] dest_link = self.links[dest_id] return f"{source_link.sinr:.1f} / {dest_link.sinr:.1f}" - def send_link(self, message_type: MessageFlags, link_id: tuple[int, int]) -> None: + def send_link(self, message_type: MessageFlags, link_id: Tuple[int, int]) -> None: nem1, nem2 = link_id link = self.emane_manager.get_nem_link(nem1, nem2, message_type) if link: diff --git a/daemon/core/emane/modelmanager.py b/daemon/core/emane/modelmanager.py deleted file mode 100644 index 92dd5b8e..00000000 --- a/daemon/core/emane/modelmanager.py +++ /dev/null @@ -1,69 +0,0 @@ -import logging -import pkgutil -from pathlib import Path - -from core import utils -from core.emane import models as emane_models -from core.emane.emanemodel import EmaneModel -from core.errors import CoreError - -logger = logging.getLogger(__name__) - - -class EmaneModelManager: - models: dict[str, type[EmaneModel]] = {} - - @classmethod - def load_locals(cls, emane_prefix: Path) -> list[str]: - """ - Load local core emane models and make them available. - - :param emane_prefix: installed emane prefix - :return: list of errors encountered loading emane models - """ - errors = [] - for module_info in pkgutil.walk_packages( - emane_models.__path__, f"{emane_models.__name__}." - ): - models = utils.load_module(module_info.name, EmaneModel) - for model in models: - logger.debug("loading emane model: %s", model.name) - try: - model.load(emane_prefix) - cls.models[model.name] = model - except CoreError as e: - errors.append(model.name) - logger.debug("not loading emane model(%s): %s", model.name, e) - return errors - - @classmethod - def load(cls, path: Path, emane_prefix: Path) -> list[str]: - """ - Search and load custom emane models and make them available. 
- - :param path: path to search for custom emane models - :param emane_prefix: installed emane prefix - :return: list of errors encountered loading emane models - """ - subdirs = [x for x in path.iterdir() if x.is_dir()] - subdirs.append(path) - errors = [] - for subdir in subdirs: - logger.debug("loading emane models from: %s", subdir) - models = utils.load_classes(subdir, EmaneModel) - for model in models: - logger.debug("loading emane model: %s", model.name) - try: - model.load(emane_prefix) - cls.models[model.name] = model - except CoreError as e: - errors.append(model.name) - logger.debug("not loading emane model(%s): %s", model.name, e) - return errors - - @classmethod - def get(cls, name: str) -> type[EmaneModel]: - model = cls.models.get(name) - if model is None: - raise CoreError(f"emame model does not exist {name}") - return model diff --git a/daemon/core/emane/models/tdma.py b/daemon/core/emane/models/tdma.py deleted file mode 100644 index 100e960d..00000000 --- a/daemon/core/emane/models/tdma.py +++ /dev/null @@ -1,65 +0,0 @@ -""" -tdma.py: EMANE TDMA model bindings for CORE -""" - -import logging -from pathlib import Path - -from core import constants, utils -from core.config import ConfigString -from core.emane import emanemodel -from core.emane.nodes import EmaneNet -from core.nodes.interface import CoreInterface - -logger = logging.getLogger(__name__) - - -class EmaneTdmaModel(emanemodel.EmaneModel): - # model name - name: str = "emane_tdma" - - # mac configuration - mac_library: str = "tdmaeventschedulerradiomodel" - mac_xml: str = "tdmaeventschedulerradiomodel.xml" - - # add custom schedule options and ignore it when writing emane xml - schedule_name: str = "schedule" - default_schedule: Path = ( - constants.CORE_DATA_DIR / "examples" / "tdma" / "schedule.xml" - ) - config_ignore: set[str] = {schedule_name} - - @classmethod - def load(cls, emane_prefix: Path) -> None: - cls.mac_defaults["pcrcurveuri"] = str( - emane_prefix - / "share/emane/xml/models/mac/tdmaeventscheduler/tdmabasemodelpcr.xml" - ) - super().load(emane_prefix) - config_item = ConfigString( - id=cls.schedule_name, - default=str(cls.default_schedule), - label="TDMA schedule file (core)", - ) - cls.mac_config.insert(0, config_item) - - def post_startup(self, iface: CoreInterface) -> None: - # get configured schedule - emane_net = self.session.get_node(self.id, EmaneNet) - config = self.session.emane.get_iface_config(emane_net, iface) - schedule = Path(config[self.schedule_name]) - if not schedule.is_file(): - logger.error("ignoring invalid tdma schedule: %s", schedule) - return - # initiate tdma schedule - nem_id = self.session.emane.get_nem_id(iface) - if not nem_id: - logger.error("could not find nem for interface") - return - service = self.session.emane.nem_service.get(nem_id) - if service: - device = service.device - logger.info( - "setting up tdma schedule: schedule(%s) device(%s)", schedule, device - ) - utils.cmd(f"emaneevent-tdmaschedule -i {device} {schedule}") diff --git a/daemon/core/emane/nodes.py b/daemon/core/emane/nodes.py index ecf684d7..5791f46a 100644 --- a/daemon/core/emane/nodes.py +++ b/daemon/core/emane/nodes.py @@ -4,23 +4,28 @@ share the same MAC+PHY model. 
""" import logging -import time -from dataclasses import dataclass -from typing import TYPE_CHECKING, Callable, Optional, Union +from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Type from core.emulator.data import InterfaceData, LinkData, LinkOptions from core.emulator.distributed import DistributedServer -from core.emulator.enumerations import MessageFlags, RegisterTlvs -from core.errors import CoreCommandError, CoreError -from core.nodes.base import CoreNetworkBase, CoreNode, NodeOptions +from core.emulator.enumerations import ( + EventTypes, + LinkTypes, + MessageFlags, + NodeTypes, + RegisterTlvs, +) +from core.errors import CoreError +from core.nodes.base import CoreNetworkBase, CoreNode from core.nodes.interface import CoreInterface -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emane.emanemodel import EmaneModel from core.emulator.session import Session - from core.location.mobility import WayPointMobility + from core.location.mobility import WirelessModel, WayPointMobility + + OptionalEmaneModel = Optional[EmaneModel] + WirelessModelType = Type[WirelessModel] try: from emane.events import LocationEvent @@ -29,121 +34,7 @@ except ImportError: from emanesh.events import LocationEvent except ImportError: LocationEvent = None - logger.debug("compatible emane python bindings not installed") - - -class TunTap(CoreInterface): - """ - TUN/TAP virtual device in TAP mode - """ - - def __init__( - self, - _id: int, - name: str, - localname: str, - use_ovs: bool, - node: CoreNode = None, - server: "DistributedServer" = None, - ) -> None: - super().__init__(_id, name, localname, use_ovs, node=node, server=server) - self.node: CoreNode = node - - def startup(self) -> None: - """ - Startup logic for a tunnel tap. - - :return: nothing - """ - self.up = True - - def shutdown(self) -> None: - """ - Shutdown functionality for a tunnel tap. - - :return: nothing - """ - if not self.up: - return - self.up = False - - def waitfor( - self, func: Callable[[], int], attempts: int = 10, maxretrydelay: float = 0.25 - ) -> bool: - """ - Wait for func() to return zero with exponential backoff. - - :param func: function to wait for a result of zero - :param attempts: number of attempts to wait for a zero result - :param maxretrydelay: maximum retry delay - :return: True if wait succeeded, False otherwise - """ - delay = 0.01 - result = False - for i in range(1, attempts + 1): - r = func() - if r == 0: - result = True - break - msg = f"attempt {i} failed with nonzero exit status {r}" - if i < attempts + 1: - msg += ", retrying..." - logger.info(msg) - time.sleep(delay) - delay += delay - if delay > maxretrydelay: - delay = maxretrydelay - else: - msg += ", giving up" - logger.info(msg) - return result - - def nodedevexists(self) -> int: - """ - Checks if device exists. - - :return: 0 if device exists, 1 otherwise - """ - try: - self.node.node_net_client.device_show(self.name) - return 0 - except CoreCommandError: - return 1 - - def waitfordevicenode(self) -> None: - """ - Check for presence of a node device - tap device may not appear right away waits. 
- - :return: nothing - """ - logger.debug("waiting for device node: %s", self.name) - count = 0 - while True: - result = self.waitfor(self.nodedevexists) - if result: - break - should_retry = count < 5 - is_emane_running = self.node.session.emane.emanerunning(self.node) - if all([should_retry, is_emane_running]): - count += 1 - else: - raise RuntimeError("node device failed to exist") - - def set_ips(self) -> None: - """ - Set interface ip addresses. - - :return: nothing - """ - self.waitfordevicenode() - for ip in self.ips(): - self.node.node_net_client.create_address(self.name, str(ip)) - - -@dataclass -class EmaneOptions(NodeOptions): - emane_model: str = None - """name of emane model to associate an emane network to""" + logging.debug("compatible emane python bindings not installed") class EmaneNet(CoreNetworkBase): @@ -153,26 +44,22 @@ class EmaneNet(CoreNetworkBase): Emane controller object that exists in a session. """ + apitype: NodeTypes = NodeTypes.EMANE + linktype: LinkTypes = LinkTypes.WIRED + type: str = "wlan" + has_custom_iface: bool = True + def __init__( self, session: "Session", _id: int = None, name: str = None, server: DistributedServer = None, - options: EmaneOptions = None, ) -> None: - options = options or EmaneOptions() - super().__init__(session, _id, name, server, options) + super().__init__(session, _id, name, server) self.conf: str = "" + self.model: "OptionalEmaneModel" = None self.mobility: Optional[WayPointMobility] = None - model_class = self.session.emane.get_model(options.emane_model) - self.wireless_model: Optional["EmaneModel"] = model_class(self.session, self.id) - if self.session.is_running(): - self.session.emane.add_node(self) - - @classmethod - def create_options(cls) -> EmaneOptions: - return EmaneOptions() def linkconfig( self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None @@ -180,15 +67,18 @@ class EmaneNet(CoreNetworkBase): """ The CommEffect model supports link configuration. """ - if not self.wireless_model: + if not self.model: return - self.wireless_model.linkconfig(iface, options, iface2) + self.model.linkconfig(iface, options, iface2) + + def config(self, conf: str) -> None: + self.conf = conf def startup(self) -> None: - self.up = True + pass def shutdown(self) -> None: - self.up = False + pass def link(self, iface1: CoreInterface, iface2: CoreInterface) -> None: pass @@ -196,37 +86,93 @@ class EmaneNet(CoreNetworkBase): def unlink(self, iface1: CoreInterface, iface2: CoreInterface) -> None: pass - def updatemodel(self, config: dict[str, str]) -> None: - """ - Update configuration for the current model. 
+ def linknet(self, net: "CoreNetworkBase") -> CoreInterface: + raise CoreError("emane networks cannot be linked to other networks") - :param config: configuration to update model with - :return: nothing - """ - if not self.wireless_model: + def updatemodel(self, config: Dict[str, str]) -> None: + if not self.model: raise CoreError(f"no model set to update for node({self.name})") - logger.info( - "node(%s) updating model(%s): %s", self.id, self.wireless_model.name, config + logging.info( + "node(%s) updating model(%s): %s", self.id, self.model.name, config ) - self.wireless_model.update_config(config) + self.model.update_config(config) - def setmodel( - self, - model: Union[type["EmaneModel"], type["WayPointMobility"]], - config: dict[str, str], - ) -> None: + def setmodel(self, model: "WirelessModelType", config: Dict[str, str]) -> None: """ set the EmaneModel associated with this node """ if model.config_type == RegisterTlvs.WIRELESS: - self.wireless_model = model(session=self.session, _id=self.id) - self.wireless_model.update_config(config) + # EmaneModel really uses values from ConfigurableManager + # when buildnemxml() is called, not during init() + self.model = model(session=self.session, _id=self.id) + self.model.update_config(config) elif model.config_type == RegisterTlvs.MOBILITY: self.mobility = model(session=self.session, _id=self.id) self.mobility.update_config(config) - def links(self, flags: MessageFlags = MessageFlags.NONE) -> list[LinkData]: - links = [] + def _nem_position( + self, iface: CoreInterface + ) -> Optional[Tuple[int, float, float, float]]: + """ + Creates nem position for emane event for a given interface. + + :param iface: interface to get nem emane position for + :return: nem position tuple, None otherwise + """ + nem_id = self.session.emane.get_nem_id(iface) + ifname = iface.localname + if nem_id is None: + logging.info("nemid for %s is unknown", ifname) + return + node = iface.node + x, y, z = node.getposition() + lat, lon, alt = self.session.location.getgeo(x, y, z) + if node.position.alt is not None: + alt = node.position.alt + node.position.set_geo(lon, lat, alt) + # altitude must be an integer or warning is printed + alt = int(round(alt)) + return nem_id, lon, lat, alt + + def setnemposition(self, iface: CoreInterface) -> None: + """ + Publish a NEM location change event using the EMANE event service. + + :param iface: interface to set nem position for + """ + if self.session.emane.service is None: + logging.info("position service not available") + return + position = self._nem_position(iface) + if position: + nemid, lon, lat, alt = position + event = LocationEvent() + event.append(nemid, latitude=lat, longitude=lon, altitude=alt) + self.session.emane.service.publish(0, event) + + def setnempositions(self, moved_ifaces: List[CoreInterface]) -> None: + """ + Several NEMs have moved, from e.g. a WaypointMobilityModel + calculation. Generate an EMANE Location Event having several + entries for each interface that has moved. 
+ """ + if len(moved_ifaces) == 0: + return + + if self.session.emane.service is None: + logging.info("position service not available") + return + + event = LocationEvent() + for iface in moved_ifaces: + position = self._nem_position(iface) + if position: + nemid, lon, lat, alt = position + event.append(nemid, latitude=lat, longitude=lon, altitude=alt) + self.session.emane.service.publish(0, event) + + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: + links = super().links(flags) emane_manager = self.session.emane # gather current emane links nem_ids = set() @@ -247,44 +193,22 @@ class EmaneNet(CoreNetworkBase): # ignore incomplete links if (nem2, nem1) not in emane_links: continue - link = emane_manager.get_nem_link(nem1, nem2, flags) + link = emane_manager.get_nem_link(nem1, nem2) if link: links.append(link) return links - def create_tuntap(self, node: CoreNode, iface_data: InterfaceData) -> CoreInterface: - """ - Create a tuntap interface for the provided node. - - :param node: node to create tuntap interface for - :param iface_data: interface data to create interface with - :return: created tuntap interface - """ - with node.lock: - if iface_data.id is not None and iface_data.id in node.ifaces: - raise CoreError( - f"node({self.id}) interface({iface_data.id}) already exists" - ) - iface_id = ( - iface_data.id if iface_data.id is not None else node.next_iface_id() - ) - name = iface_data.name if iface_data.name is not None else f"eth{iface_id}" - session_id = self.session.short_session_id() - localname = f"tap{node.id}.{iface_id}.{session_id}" - iface = TunTap(iface_id, name, localname, self.session.use_ovs(), node=node) - if iface_data.mac: - iface.set_mac(iface_data.mac) - for ip in iface_data.get_ips(): - iface.add_ip(ip) - node.ifaces[iface_id] = iface - self.attach(iface) - if self.up: - iface.startup() - if self.session.is_running(): + def custom_iface(self, node: CoreNode, iface_data: InterfaceData) -> CoreInterface: + # TUN/TAP is not ready for addressing yet; the device may + # take some time to appear, and installing it into a + # namespace after it has been bound removes addressing; + # save addresses with the interface now + iface_id = node.newtuntap(iface_data.id, iface_data.name) + node.attachnet(iface_id, self) + iface = node.get_iface(iface_id) + iface.set_mac(iface_data.mac) + for ip in iface_data.get_ips(): + iface.add_ip(ip) + if self.session.state == EventTypes.RUNTIME_STATE: self.session.emane.start_iface(self, iface) return iface - - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - raise CoreError( - f"emane network({self.name}) do not support adopting interfaces" - ) diff --git a/daemon/core/emane/models/rfpipe.py b/daemon/core/emane/rfpipe.py similarity index 63% rename from daemon/core/emane/models/rfpipe.py rename to daemon/core/emane/rfpipe.py index 7dace8c7..068ef800 100644 --- a/daemon/core/emane/models/rfpipe.py +++ b/daemon/core/emane/rfpipe.py @@ -1,7 +1,7 @@ """ rfpipe.py: EMANE RF-PIPE model for CORE """ -from pathlib import Path +import os from core.emane import emanemodel @@ -15,8 +15,8 @@ class EmaneRfPipeModel(emanemodel.EmaneModel): mac_xml: str = "rfpipemaclayer.xml" @classmethod - def load(cls, emane_prefix: Path) -> None: - cls.mac_defaults["pcrcurveuri"] = str( - emane_prefix / "share/emane/xml/models/mac/rfpipe/rfpipepcr.xml" + def load(cls, emane_prefix: str) -> None: + cls.mac_defaults["pcrcurveuri"] = os.path.join( + emane_prefix, "share/emane/xml/models/mac/rfpipe/rfpipepcr.xml" ) 
super().load(emane_prefix) diff --git a/daemon/core/emane/tdma.py b/daemon/core/emane/tdma.py new file mode 100644 index 00000000..ee80f3d7 --- /dev/null +++ b/daemon/core/emane/tdma.py @@ -0,0 +1,67 @@ +""" +tdma.py: EMANE TDMA model bindings for CORE +""" + +import logging +import os +from typing import Set + +from core import constants, utils +from core.config import Configuration +from core.emane import emanemodel +from core.emulator.enumerations import ConfigDataTypes + + +class EmaneTdmaModel(emanemodel.EmaneModel): + # model name + name: str = "emane_tdma" + + # mac configuration + mac_library: str = "tdmaeventschedulerradiomodel" + mac_xml: str = "tdmaeventschedulerradiomodel.xml" + + # add custom schedule options and ignore it when writing emane xml + schedule_name: str = "schedule" + default_schedule: str = os.path.join( + constants.CORE_DATA_DIR, "examples", "tdma", "schedule.xml" + ) + config_ignore: Set[str] = {schedule_name} + + @classmethod + def load(cls, emane_prefix: str) -> None: + cls.mac_defaults["pcrcurveuri"] = os.path.join( + emane_prefix, + "share/emane/xml/models/mac/tdmaeventscheduler/tdmabasemodelpcr.xml", + ) + super().load(emane_prefix) + cls.mac_config.insert( + 0, + Configuration( + _id=cls.schedule_name, + _type=ConfigDataTypes.STRING, + default=cls.default_schedule, + label="TDMA schedule file (core)", + ), + ) + + def post_startup(self) -> None: + """ + Logic to execute after the emane manager is finished with startup. + + :return: nothing + """ + # get configured schedule + config = self.session.emane.get_configs(node_id=self.id, config_type=self.name) + if not config: + return + schedule = config[self.schedule_name] + + # get the set event device + event_device = self.session.emane.event_device + + # initiate tdma schedule + logging.info( + "setting up tdma schedule: schedule(%s) device(%s)", schedule, event_device + ) + args = f"emaneevent-tdmaschedule -i {event_device} {schedule}" + utils.cmd(args) diff --git a/daemon/core/emulator/broadcast.py b/daemon/core/emulator/broadcast.py deleted file mode 100644 index bf56f99d..00000000 --- a/daemon/core/emulator/broadcast.py +++ /dev/null @@ -1,67 +0,0 @@ -from collections.abc import Callable -from typing import TypeVar, Union - -from core.emulator.data import ( - ConfigData, - EventData, - ExceptionData, - FileData, - LinkData, - NodeData, -) -from core.errors import CoreError - -T = TypeVar( - "T", bound=Union[EventData, ExceptionData, NodeData, LinkData, FileData, ConfigData] -) - - -class BroadcastManager: - def __init__(self) -> None: - """ - Creates a BroadcastManager instance. - """ - self.handlers: dict[type[T], set[Callable[[T], None]]] = {} - - def send(self, data: T) -> None: - """ - Retrieve handlers for data, and run all current handlers. - - :param data: data to provide to handlers - :return: nothing - """ - handlers = self.handlers.get(type(data), set()) - for handler in handlers: - handler(data) - - def add_handler(self, data_type: type[T], handler: Callable[[T], None]) -> None: - """ - Add a handler for a given data type. - - :param data_type: type of data to add handler for - :param handler: handler to add - :return: nothing - """ - handlers = self.handlers.setdefault(data_type, set()) - if handler in handlers: - raise CoreError( - f"cannot add data({data_type}) handler({repr(handler)}), " - f"already exists" - ) - handlers.add(handler) - - def remove_handler(self, data_type: type[T], handler: Callable[[T], None]) -> None: - """ - Remove a handler for a given data type. 
- - :param data_type: type of data to remove handler for - :param handler: handler to remove - :return: nothing - """ - handlers = self.handlers.get(data_type, set()) - if handler not in handlers: - raise CoreError( - f"cannot remove data({data_type}) handler({repr(handler)}), " - f"does not exist" - ) - handlers.remove(handler) diff --git a/daemon/core/emulator/controlnets.py b/daemon/core/emulator/controlnets.py deleted file mode 100644 index 27b00367..00000000 --- a/daemon/core/emulator/controlnets.py +++ /dev/null @@ -1,239 +0,0 @@ -import logging -from typing import TYPE_CHECKING, Optional - -from core import utils -from core.emulator.data import InterfaceData -from core.errors import CoreError -from core.nodes.base import CoreNode -from core.nodes.interface import DEFAULT_MTU -from core.nodes.network import CtrlNet - -logger = logging.getLogger(__name__) - -if TYPE_CHECKING: - from core.emulator.session import Session - -CTRL_NET_ID: int = 9001 -ETC_HOSTS_PATH: str = "/etc/hosts" - - -class ControlNetManager: - def __init__(self, session: "Session") -> None: - self.session: "Session" = session - self.etc_hosts_header: str = f"CORE session {self.session.id} host entries" - - def _etc_hosts_enabled(self) -> bool: - """ - Determines if /etc/hosts should be configured. - - :return: True if /etc/hosts should be configured, False otherwise - """ - return self.session.options.get_bool("update_etc_hosts", False) - - def _get_server_ifaces( - self, - ) -> tuple[None, Optional[str], Optional[str], Optional[str]]: - """ - Retrieve control net server interfaces. - - :return: control net server interfaces - """ - d0 = self.session.options.get("controlnetif0") - if d0: - logger.error("controlnet0 cannot be assigned with a host interface") - d1 = self.session.options.get("controlnetif1") - d2 = self.session.options.get("controlnetif2") - d3 = self.session.options.get("controlnetif3") - return None, d1, d2, d3 - - def _get_prefixes( - self, - ) -> tuple[Optional[str], Optional[str], Optional[str], Optional[str]]: - """ - Retrieve control net prefixes. - - :return: control net prefixes - """ - p = self.session.options.get("controlnet") - p0 = self.session.options.get("controlnet0") - p1 = self.session.options.get("controlnet1") - p2 = self.session.options.get("controlnet2") - p3 = self.session.options.get("controlnet3") - if not p0 and p: - p0 = p - return p0, p1, p2, p3 - - def update_etc_hosts(self) -> None: - """ - Add the IP addresses of control interfaces to the /etc/hosts file. - - :return: nothing - """ - if not self._etc_hosts_enabled(): - return - control_net = self.get_control_net(0) - entries = "" - for iface in control_net.get_ifaces(): - name = iface.node.name - for ip in iface.ips(): - entries += f"{ip.ip} {name}\n" - logger.info("adding entries to /etc/hosts") - utils.file_munge(ETC_HOSTS_PATH, self.etc_hosts_header, entries) - - def clear_etc_hosts(self) -> None: - """ - Clear IP addresses of control interfaces from the /etc/hosts file. - - :return: nothing - """ - if not self._etc_hosts_enabled(): - return - logger.info("removing /etc/hosts file entries") - utils.file_demunge(ETC_HOSTS_PATH, self.etc_hosts_header) - - def get_control_net_index(self, dev: str) -> int: - """ - Retrieve control net index. 
- - :param dev: device to get control net index for - :return: control net index, -1 otherwise - """ - if dev[0:4] == "ctrl" and int(dev[4]) in (0, 1, 2, 3): - index = int(dev[4]) - if index == 0: - return index - if index < 4 and self._get_prefixes()[index] is not None: - return index - return -1 - - def get_control_net(self, index: int) -> Optional[CtrlNet]: - """ - Retrieve a control net based on index. - - :param index: control net index - :return: control net when available, None otherwise - """ - try: - return self.session.get_node(CTRL_NET_ID + index, CtrlNet) - except CoreError: - return None - - def add_control_net( - self, index: int, conf_required: bool = True - ) -> Optional[CtrlNet]: - """ - Create a control network bridge as necessary. The conf_reqd flag, - when False, causes a control network bridge to be added even if - one has not been configured. - - :param index: network index to add - :param conf_required: flag to check if conf is required - :return: control net node - """ - logger.info( - "checking to add control net index(%s) conf_required(%s)", - index, - conf_required, - ) - # check for valid index - if not (0 <= index <= 3): - raise CoreError(f"invalid control net index({index})") - # return any existing control net bridge - control_net = self.get_control_net(index) - if control_net: - logger.info("control net index(%s) already exists", index) - return control_net - # retrieve prefix for current index - index_prefix = self._get_prefixes()[index] - if not index_prefix: - if conf_required: - return None - else: - index_prefix = CtrlNet.DEFAULT_PREFIX_LIST[index] - # retrieve valid prefix from old style values - prefixes = index_prefix.split() - if len(prefixes) > 1: - # a list of per-host prefixes is provided - try: - prefix = prefixes[0].split(":", 1)[1] - except IndexError: - prefix = prefixes[0] - else: - prefix = prefixes[0] - # use the updown script for control net 0 only - updown_script = None - if index == 0: - updown_script = self.session.options.get("controlnet_updown_script") - # build a new controlnet bridge - _id = CTRL_NET_ID + index - server_iface = self._get_server_ifaces()[index] - logger.info( - "adding controlnet(%s) prefix(%s) updown(%s) server interface(%s)", - _id, - prefix, - updown_script, - server_iface, - ) - options = CtrlNet.create_options() - options.prefix = prefix - options.updown_script = updown_script - options.serverintf = server_iface - control_net = self.session.create_node(CtrlNet, False, _id, options=options) - control_net.brname = f"ctrl{index}.{self.session.short_session_id()}" - control_net.startup() - return control_net - - def remove_control_net(self, index: int) -> None: - """ - Removes control net. - - :param index: index of control net to remove - :return: nothing - """ - control_net = self.get_control_net(index) - if control_net: - logger.info("removing control net index(%s)", index) - self.session.delete_node(control_net.id) - - def add_control_iface(self, node: CoreNode, index: int) -> None: - """ - Adds a control net interface to a node. 
- - :param node: node to add control net interface to - :param index: index of control net to add interface to - :return: nothing - :raises CoreError: if control net doesn't exist, interface already exists, - or there is an error creating the interface - """ - control_net = self.get_control_net(index) - if not control_net: - raise CoreError(f"control net index({index}) does not exist") - iface_id = control_net.CTRLIF_IDX_BASE + index - if node.ifaces.get(iface_id): - raise CoreError(f"control iface({iface_id}) already exists") - try: - logger.info( - "node(%s) adding control net index(%s) interface(%s)", - node.name, - index, - iface_id, - ) - ip4 = control_net.prefix[node.id] - ip4_mask = control_net.prefix.prefixlen - iface_data = InterfaceData( - id=iface_id, - name=f"ctrl{index}", - mac=utils.random_mac(), - ip4=ip4, - ip4_mask=ip4_mask, - mtu=DEFAULT_MTU, - ) - iface = node.create_iface(iface_data) - control_net.attach(iface) - iface.control = True - except ValueError: - raise CoreError( - f"error adding control net interface to node({node.id}), " - f"invalid control net prefix({control_net.prefix}), " - "a longer prefix length may be required" - ) diff --git a/daemon/core/emulator/coreemu.py b/daemon/core/emulator/coreemu.py index 574002e6..885fb431 100644 --- a/daemon/core/emulator/coreemu.py +++ b/daemon/core/emulator/coreemu.py @@ -1,17 +1,35 @@ +import atexit import logging import os -from pathlib import Path +import signal +import sys +from typing import Dict, List, Type -from core import utils +import core.services +from core import configservices, utils from core.configservice.manager import ConfigServiceManager -from core.emane.modelmanager import EmaneModelManager from core.emulator.session import Session from core.executables import get_requirements from core.services.coreservices import ServiceManager -logger = logging.getLogger(__name__) -DEFAULT_EMANE_PREFIX: str = "/usr" +def signal_handler(signal_number: int, _) -> None: + """ + Handle signals and force an exit with cleanup. + + :param signal_number: signal number + :param _: ignored + :return: nothing + """ + logging.info("caught signal: %s", signal_number) + sys.exit(signal_number) + + +signal.signal(signal.SIGHUP, signal_handler) +signal.signal(signal.SIGINT, signal_handler) +signal.signal(signal.SIGTERM, signal_handler) +signal.signal(signal.SIGUSR1, signal_handler) +signal.signal(signal.SIGUSR2, signal_handler) class CoreEmu: @@ -19,7 +37,7 @@ class CoreEmu: Provides logic for creating and configuring CORE sessions and the nodes within them. """ - def __init__(self, config: dict[str, str] = None) -> None: + def __init__(self, config: Dict[str, str] = None) -> None: """ Create a CoreEmu object. 
@@ -29,24 +47,31 @@ class CoreEmu: os.umask(0) # configuration - config = config if config else {} - self.config: dict[str, str] = config + if config is None: + config = {} + self.config: Dict[str, str] = config # session management - self.sessions: dict[int, Session] = {} + self.sessions: Dict[int, Session] = {} # load services - self.service_errors: list[str] = [] - self.service_manager: ConfigServiceManager = ConfigServiceManager() - self._load_services() + self.service_errors: List[str] = [] + self.load_services() - # check and load emane - self.has_emane: bool = False - self._load_emane() + # config services + self.service_manager: ConfigServiceManager = ConfigServiceManager() + config_services_path = os.path.abspath(os.path.dirname(configservices.__file__)) + self.service_manager.load(config_services_path) + custom_dir = self.config.get("custom_config_services_dir") + if custom_dir: + self.service_manager.load(custom_dir) # check executables exist on path self._validate_env() + # catch exit event + atexit.register(self.shutdown) + def _validate_env(self) -> None: """ Validates executables CORE depends on exist on path. @@ -58,54 +83,23 @@ class CoreEmu: for requirement in get_requirements(use_ovs): utils.which(requirement, required=True) - def _load_services(self) -> None: + def load_services(self) -> None: """ Loads default and custom services for use within CORE. :return: nothing """ # load default services - self.service_errors = ServiceManager.load_locals() + self.service_errors = core.services.load() + # load custom services service_paths = self.config.get("custom_services_dir") - logger.debug("custom service paths: %s", service_paths) - if service_paths is not None: + logging.debug("custom service paths: %s", service_paths) + if service_paths: for service_path in service_paths.split(","): - service_path = Path(service_path.strip()) + service_path = service_path.strip() custom_service_errors = ServiceManager.add_services(service_path) self.service_errors.extend(custom_service_errors) - # load default config services - self.service_manager.load_locals() - # load custom config services - custom_dir = self.config.get("custom_config_services_dir") - if custom_dir is not None: - custom_dir = Path(custom_dir) - self.service_manager.load(custom_dir) - - def _load_emane(self) -> None: - """ - Check if emane is installed and load models. 
- - :return: nothing - """ - # check for emane - path = utils.which("emane", required=False) - self.has_emane = path is not None - if not self.has_emane: - logger.info("emane is not installed, emane functionality disabled") - return - # get version - emane_version = utils.cmd("emane --version") - logger.info("using emane: %s", emane_version) - emane_prefix = self.config.get("emane_prefix", DEFAULT_EMANE_PREFIX) - emane_prefix = Path(emane_prefix) - EmaneModelManager.load_locals(emane_prefix) - # load custom models - custom_path = self.config.get("emane_models_dir") - if custom_path is not None: - logger.info("loading custom emane models: %s", custom_path) - custom_path = Path(custom_path) - EmaneModelManager.load(custom_path, emane_prefix) def shutdown(self) -> None: """ @@ -113,12 +107,14 @@ class CoreEmu: :return: nothing """ - logger.info("shutting down all sessions") - while self.sessions: - _, session = self.sessions.popitem() + logging.info("shutting down all sessions") + sessions = self.sessions.copy() + self.sessions.clear() + for _id in sessions: + session = sessions[_id] session.shutdown() - def create_session(self, _id: int = None, _cls: type[Session] = Session) -> Session: + def create_session(self, _id: int = None, _cls: Type[Session] = Session) -> Session: """ Create a new CORE session. @@ -132,7 +128,7 @@ class CoreEmu: _id += 1 session = _cls(_id, config=self.config) session.service_manager = self.service_manager - logger.info("created session: %s", _id) + logging.info("created session: %s", _id) self.sessions[_id] = session return session @@ -143,14 +139,14 @@ class CoreEmu: :param _id: session id to delete :return: True if deleted, False otherwise """ - logger.info("deleting session: %s", _id) + logging.info("deleting session: %s", _id) session = self.sessions.pop(_id, None) result = False if session: - logger.info("shutting session down: %s", _id) + logging.info("shutting session down: %s", _id) session.data_collect() session.shutdown() result = True else: - logger.error("session to delete did not exist: %s", _id) + logging.error("session to delete did not exist: %s", _id) return result diff --git a/daemon/core/emulator/data.py b/daemon/core/emulator/data.py index 7d3dc8dc..68a92eea 100644 --- a/daemon/core/emulator/data.py +++ b/daemon/core/emulator/data.py @@ -2,7 +2,7 @@ CORE data objects. 
""" from dataclasses import dataclass, field -from typing import TYPE_CHECKING, Any, Optional +from typing import TYPE_CHECKING, List, Optional, Tuple import netaddr @@ -24,7 +24,7 @@ class ConfigData: node: int = None object: str = None type: int = None - data_types: tuple[int] = None + data_types: Tuple[int] = None data_values: str = None captions: str = None bitmap: str = None @@ -81,8 +81,8 @@ class NodeOptions: model: Optional[str] = "PC" canvas: int = None icon: str = None - services: list[str] = field(default_factory=list) - config_services: list[str] = field(default_factory=list) + services: List[str] = field(default_factory=list) + config_services: List[str] = field(default_factory=list) x: float = None y: float = None lat: float = None @@ -91,11 +91,6 @@ class NodeOptions: server: str = None image: str = None emane: str = None - legacy: bool = False - # src, dst - binds: list[tuple[str, str]] = field(default_factory=list) - # src, dst, unique, delete - volumes: list[tuple[str, str, bool, bool]] = field(default_factory=list) def set_position(self, x: float, y: float) -> None: """ @@ -146,9 +141,8 @@ class InterfaceData: ip4_mask: int = None ip6: str = None ip6_mask: int = None - mtu: int = None - def get_ips(self) -> list[str]: + def get_ips(self) -> List[str]: """ Returns a list of ip4 and ip6 addresses when present. @@ -180,67 +174,6 @@ class LinkOptions: key: int = None buffer: int = None - def update(self, options: "LinkOptions") -> bool: - """ - Updates current options with values from other options. - - :param options: options to update with - :return: True if any value has changed, False otherwise - """ - changed = False - if options.delay is not None and 0 <= options.delay != self.delay: - self.delay = options.delay - changed = True - if options.bandwidth is not None and 0 <= options.bandwidth != self.bandwidth: - self.bandwidth = options.bandwidth - changed = True - if options.loss is not None and 0 <= options.loss != self.loss: - self.loss = options.loss - changed = True - if options.dup is not None and 0 <= options.dup != self.dup: - self.dup = options.dup - changed = True - if options.jitter is not None and 0 <= options.jitter != self.jitter: - self.jitter = options.jitter - changed = True - if options.buffer is not None and 0 <= options.buffer != self.buffer: - self.buffer = options.buffer - changed = True - return changed - - def is_clear(self) -> bool: - """ - Checks if the current option values represent a clear state. - - :return: True if the current values should clear, False otherwise - """ - clear = self.delay is None or self.delay <= 0 - clear &= self.jitter is None or self.jitter <= 0 - clear &= self.loss is None or self.loss <= 0 - clear &= self.dup is None or self.dup <= 0 - clear &= self.bandwidth is None or self.bandwidth <= 0 - clear &= self.buffer is None or self.buffer <= 0 - return clear - - def __eq__(self, other: Any) -> bool: - """ - Custom logic to check if this link options is equivalent to another. 
- - :param other: other object to check - :return: True if they are both link options with the same values, - False otherwise - """ - if not isinstance(other, LinkOptions): - return False - return ( - self.delay == other.delay - and self.jitter == other.jitter - and self.loss == other.loss - and self.dup == other.dup - and self.bandwidth == other.bandwidth - and self.buffer == other.buffer - ) - @dataclass class LinkData: diff --git a/daemon/core/emulator/distributed.py b/daemon/core/emulator/distributed.py index 1c0d3c92..a5e1009f 100644 --- a/daemon/core/emulator/distributed.py +++ b/daemon/core/emulator/distributed.py @@ -6,23 +6,19 @@ import logging import os import threading from collections import OrderedDict -from pathlib import Path from tempfile import NamedTemporaryFile -from typing import TYPE_CHECKING, Callable +from typing import TYPE_CHECKING, Callable, Dict, Tuple import netaddr from fabric import Connection from invoke import UnexpectedExit from core import utils -from core.emulator.links import CoreLink from core.errors import CoreCommandError, CoreError from core.executables import get_requirements from core.nodes.interface import GreTap from core.nodes.network import CoreNetwork, CtrlNet -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.session import Session @@ -48,7 +44,7 @@ class DistributedServer: self.lock: threading.Lock = threading.Lock() def remote_cmd( - self, cmd: str, env: dict[str, str] = None, cwd: str = None, wait: bool = True + self, cmd: str, env: Dict[str, str] = None, cwd: str = None, wait: bool = True ) -> str: """ Run command remotely using server connection. @@ -65,7 +61,7 @@ class DistributedServer: replace_env = env is not None if not wait: cmd += " &" - logger.debug( + logging.debug( "remote cmd server(%s) cwd(%s) wait(%s): %s", self.host, cwd, wait, cmd ) try: @@ -83,31 +79,31 @@ class DistributedServer: stdout, stderr = e.streams_for_display() raise CoreCommandError(e.result.exited, cmd, stdout, stderr) - def remote_put(self, src_path: Path, dst_path: Path) -> None: + def remote_put(self, source: str, destination: str) -> None: """ Push file to remote server. - :param src_path: source file to push - :param dst_path: destination file location + :param source: source file to push + :param destination: destination file location :return: nothing """ with self.lock: - self.conn.put(str(src_path), str(dst_path)) + self.conn.put(source, destination) - def remote_put_temp(self, dst_path: Path, data: str) -> None: + def remote_put_temp(self, destination: str, data: str) -> None: """ Remote push file contents to a remote server, using a temp file as an intermediate step. 
- :param dst_path: file destination for data + :param destination: file destination for data :param data: data to store in remote file :return: nothing """ with self.lock: temp = NamedTemporaryFile(delete=False) - temp.write(data.encode()) + temp.write(data.encode("utf-8")) temp.close() - self.conn.put(temp.name, str(dst_path)) + self.conn.put(temp.name, destination) os.unlink(temp.name) @@ -123,9 +119,11 @@ class DistributedController: :param session: session """ self.session: "Session" = session - self.servers: dict[str, DistributedServer] = OrderedDict() - self.tunnels: dict[int, tuple[GreTap, GreTap]] = {} - self.address: str = self.session.options.get("distributed_address") + self.servers: Dict[str, DistributedServer] = OrderedDict() + self.tunnels: Dict[int, Tuple[GreTap, GreTap]] = {} + self.address: str = self.session.options.get_config( + "distributed_address", default=None + ) def add_server(self, name: str, host: str) -> None: """ @@ -146,7 +144,7 @@ class DistributedController: f"command({requirement})" ) self.servers[name] = server - cmd = f"mkdir -p {self.session.directory}" + cmd = f"mkdir -p {self.session.session_dir}" server.remote_cmd(cmd) def execute(self, func: Callable[[DistributedServer], None]) -> None: @@ -172,55 +170,41 @@ class DistributedController: tunnels = self.tunnels[key] for tunnel in tunnels: tunnel.shutdown() + # remove all remote session directories for name in self.servers: server = self.servers[name] - cmd = f"rm -rf {self.session.directory}" + cmd = f"rm -rf {self.session.session_dir}" server.remote_cmd(cmd) + # clear tunnels self.tunnels.clear() def start(self) -> None: """ - Start distributed network tunnels for control networks. + Start distributed network tunnels. :return: nothing """ - mtu = self.session.options.get_int("mtu") - for node in self.session.nodes.values(): - if not isinstance(node, CtrlNet) or node.serverintf is not None: + for node_id in self.session.nodes: + node = self.session.nodes[node_id] + if not isinstance(node, CoreNetwork): + continue + if isinstance(node, CtrlNet) and node.serverintf is not None: continue for name in self.servers: server = self.servers[name] - self.create_gre_tunnel(node, server, mtu, True) - - def create_gre_tunnels(self, core_link: CoreLink) -> None: - """ - Creates gre tunnels for a core link with a ptp network connection. - - :param core_link: core link to create gre tunnel for - :return: nothing - """ - if not self.servers: - return - if not core_link.ptp: - raise CoreError( - "attempted to create gre tunnel for core link without a ptp network" - ) - mtu = self.session.options.get_int("mtu") - for server in self.servers.values(): - self.create_gre_tunnel(core_link.ptp, server, mtu, True) + self.create_gre_tunnel(node, server) def create_gre_tunnel( - self, node: CoreNetwork, server: DistributedServer, mtu: int, start: bool - ) -> tuple[GreTap, GreTap]: + self, node: CoreNetwork, server: DistributedServer + ) -> Tuple[GreTap, GreTap]: """ Create gre tunnel using a pair of gre taps between the local and remote server. 
:param node: node to create gre tunnel for - :param server: server to create tunnel for - :param mtu: mtu for gre taps - :param start: True to start gre taps, False otherwise + :param server: server to create + tunnel for :return: local and remote gre taps created for tunnel """ host = server.host @@ -228,20 +212,23 @@ class DistributedController: tunnel = self.tunnels.get(key) if tunnel is not None: return tunnel + # local to server - logger.info("local tunnel node(%s) to remote(%s) key(%s)", node.name, host, key) - local_tap = GreTap(self.session, host, key=key, mtu=mtu) - if start: - local_tap.startup() - local_tap.net_client.set_iface_master(node.brname, local_tap.localname) + logging.info( + "local tunnel node(%s) to remote(%s) key(%s)", node.name, host, key + ) + local_tap = GreTap(session=self.session, remoteip=host, key=key) + local_tap.net_client.set_iface_master(node.brname, local_tap.localname) + # server to local - logger.info( + logging.info( "remote tunnel node(%s) to local(%s) key(%s)", node.name, self.address, key ) - remote_tap = GreTap(self.session, self.address, key=key, server=server, mtu=mtu) - if start: - remote_tap.startup() - remote_tap.net_client.set_iface_master(node.brname, remote_tap.localname) + remote_tap = GreTap( + session=self.session, remoteip=self.address, key=key, server=server + ) + remote_tap.net_client.set_iface_master(node.brname, remote_tap.localname) + # save tunnels for shutdown tunnel = (local_tap, remote_tap) self.tunnels[key] = tunnel @@ -257,7 +244,7 @@ class DistributedController: :param node2_id: node two id :return: tunnel key for the node pair """ - logger.debug("creating tunnel key for: %s, %s", node1_id, node2_id) + logging.debug("creating tunnel key for: %s, %s", node1_id, node2_id) key = ( (self.session.id << 16) ^ utils.hashkey(node1_id) diff --git a/daemon/core/emulator/enumerations.py b/daemon/core/emulator/enumerations.py index 96fb919b..83e7bffd 100644 --- a/daemon/core/emulator/enumerations.py +++ b/daemon/core/emulator/enumerations.py @@ -20,17 +20,6 @@ class MessageFlags(Enum): TTY = 0x40 -class ConfigFlags(Enum): - """ - Configuration flags. - """ - - NONE = 0x00 - REQUEST = 0x01 - UPDATE = 0x02 - RESET = 0x03 - - class NodeTypes(Enum): """ Node types. @@ -49,8 +38,6 @@ class NodeTypes(Enum): CONTROL_NET = 13 DOCKER = 15 LXC = 16 - WIRELESS = 17 - PODMAN = 18 class LinkTypes(Enum): diff --git a/daemon/core/emulator/hooks.py b/daemon/core/emulator/hooks.py deleted file mode 100644 index ffeeafeb..00000000 --- a/daemon/core/emulator/hooks.py +++ /dev/null @@ -1,145 +0,0 @@ -import logging -import subprocess -from collections.abc import Callable -from pathlib import Path - -from core.emulator.enumerations import EventTypes -from core.errors import CoreError - -logger = logging.getLogger(__name__) - - -class HookManager: - """ - Provides functionality for managing and running script/callback hooks. - """ - - def __init__(self) -> None: - """ - Create a HookManager instance. - """ - self.script_hooks: dict[EventTypes, dict[str, str]] = {} - self.callback_hooks: dict[EventTypes, list[Callable[[], None]]] = {} - - def reset(self) -> None: - """ - Clear all current hooks. - - :return: nothing - """ - self.script_hooks.clear() - self.callback_hooks.clear() - - def add_script_hook(self, state: EventTypes, file_name: str, data: str) -> None: - """ - Add a hook script to run for a given state. 
- - :param state: state to run hook on - :param file_name: hook file name - :param data: file data - :return: nothing - """ - logger.info("setting state hook: %s - %s", state, file_name) - state_hooks = self.script_hooks.setdefault(state, {}) - if file_name in state_hooks: - raise CoreError( - f"adding duplicate state({state.name}) hook script({file_name})" - ) - state_hooks[file_name] = data - - def delete_script_hook(self, state: EventTypes, file_name: str) -> None: - """ - Delete a script hook from a given state. - - :param state: state to delete script hook from - :param file_name: name of script to delete - :return: nothing - """ - state_hooks = self.script_hooks.get(state, {}) - if file_name not in state_hooks: - raise CoreError( - f"deleting state({state.name}) hook script({file_name}) " - "that does not exist" - ) - del state_hooks[file_name] - - def add_callback_hook( - self, state: EventTypes, hook: Callable[[EventTypes], None] - ) -> None: - """ - Add a hook callback to run for a state. - - :param state: state to add hook for - :param hook: callback to run - :return: nothing - """ - hooks = self.callback_hooks.setdefault(state, []) - if hook in hooks: - name = getattr(callable, "__name__", repr(hook)) - raise CoreError( - f"adding duplicate state({state.name}) hook callback({name})" - ) - hooks.append(hook) - - def delete_callback_hook( - self, state: EventTypes, hook: Callable[[EventTypes], None] - ) -> None: - """ - Delete a state hook. - - :param state: state to delete hook for - :param hook: hook to delete - :return: nothing - """ - hooks = self.callback_hooks.get(state, []) - if hook not in hooks: - name = getattr(callable, "__name__", repr(hook)) - raise CoreError( - f"deleting state({state.name}) hook callback({name}) " - "that does not exist" - ) - hooks.remove(hook) - - def run_hooks( - self, state: EventTypes, directory: Path, env: dict[str, str] - ) -> None: - """ - Run all hooks for the current state. - - :param state: state to run hooks for - :param directory: directory to run script hooks within - :param env: environment to run script hooks with - :return: nothing - """ - for state_hooks in self.script_hooks.get(state, {}): - for file_name, data in state_hooks.items(): - logger.info("running hook %s", file_name) - file_path = directory / file_name - log_path = directory / f"{file_name}.log" - try: - with file_path.open("w") as f: - f.write(data) - with log_path.open("w") as f: - args = ["/bin/sh", file_name] - subprocess.check_call( - args, - stdout=f, - stderr=subprocess.STDOUT, - close_fds=True, - cwd=directory, - env=env, - ) - except (OSError, subprocess.CalledProcessError) as e: - raise CoreError( - f"failure running state({state.name}) " - f"hook script({file_name}): {e}" - ) - for hook in self.callback_hooks.get(state, []): - try: - hook() - except Exception as e: - name = getattr(callable, "__name__", repr(hook)) - raise CoreError( - f"failure running state({state.name}) " - f"hook callback({name}): {e}" - ) diff --git a/daemon/core/emulator/links.py b/daemon/core/emulator/links.py deleted file mode 100644 index 5df29d90..00000000 --- a/daemon/core/emulator/links.py +++ /dev/null @@ -1,257 +0,0 @@ -""" -Provides functionality for maintaining information about known links -for a session. 
-""" - -import logging -from collections.abc import ValuesView -from dataclasses import dataclass -from typing import Optional - -from core.emulator.data import LinkData, LinkOptions -from core.emulator.enumerations import LinkTypes, MessageFlags -from core.errors import CoreError -from core.nodes.base import NodeBase -from core.nodes.interface import CoreInterface -from core.nodes.network import PtpNet - -logger = logging.getLogger(__name__) -LinkKeyType = tuple[int, Optional[int], int, Optional[int]] - - -def create_key( - node1: NodeBase, - iface1: Optional[CoreInterface], - node2: NodeBase, - iface2: Optional[CoreInterface], -) -> LinkKeyType: - """ - Creates a unique key for tracking links. - - :param node1: first node in link - :param iface1: node1 interface - :param node2: second node in link - :param iface2: node2 interface - :return: link key - """ - iface1_id = iface1.id if iface1 else None - iface2_id = iface2.id if iface2 else None - if node1.id < node2.id: - return node1.id, iface1_id, node2.id, iface2_id - else: - return node2.id, iface2_id, node1.id, iface1_id - - -@dataclass -class CoreLink: - """ - Provides a core link data structure. - """ - - node1: NodeBase - iface1: Optional[CoreInterface] - node2: NodeBase - iface2: Optional[CoreInterface] - ptp: PtpNet = None - label: str = None - color: str = None - - def key(self) -> LinkKeyType: - """ - Retrieve the key for this link. - - :return: link key - """ - return create_key(self.node1, self.iface1, self.node2, self.iface2) - - def is_unidirectional(self) -> bool: - """ - Checks if this link is considered unidirectional, due to current - iface configurations. - - :return: True if unidirectional, False otherwise - """ - unidirectional = False - if self.iface1 and self.iface2: - unidirectional = self.iface1.options != self.iface2.options - return unidirectional - - def options(self) -> LinkOptions: - """ - Retrieve the options for this link. - - :return: options for this link - """ - if self.is_unidirectional(): - options = self.iface1.options - else: - if self.iface1: - options = self.iface1.options - else: - options = self.iface2.options - return options - - def get_data(self, message_type: MessageFlags, source: str = None) -> LinkData: - """ - Create link data for this link. - - :param message_type: link data message type - :param source: source for this data - :return: link data - """ - iface1_data = self.iface1.get_data() if self.iface1 else None - iface2_data = self.iface2.get_data() if self.iface2 else None - return LinkData( - message_type=message_type, - type=LinkTypes.WIRED, - node1_id=self.node1.id, - node2_id=self.node2.id, - iface1=iface1_data, - iface2=iface2_data, - options=self.options(), - label=self.label, - color=self.color, - source=source, - ) - - def get_data_unidirectional(self, source: str = None) -> LinkData: - """ - Create other unidirectional link data. - - :param source: source for this data - :return: unidirectional link data - """ - iface1_data = self.iface1.get_data() if self.iface1 else None - iface2_data = self.iface2.get_data() if self.iface2 else None - return LinkData( - message_type=MessageFlags.NONE, - type=LinkTypes.WIRED, - node1_id=self.node2.id, - node2_id=self.node1.id, - iface1=iface2_data, - iface2=iface1_data, - options=self.iface2.options, - label=self.label, - color=self.color, - source=source, - ) - - -class LinkManager: - """ - Provides core link management. - """ - - def __init__(self) -> None: - """ - Create a LinkManager instance. 
- """ - self._links: dict[LinkKeyType, CoreLink] = {} - self._node_links: dict[int, dict[LinkKeyType, CoreLink]] = {} - - def add(self, core_link: CoreLink) -> None: - """ - Add a core link to be tracked. - - :param core_link: link to track - :return: nothing - """ - node1, iface1 = core_link.node1, core_link.iface1 - node2, iface2 = core_link.node2, core_link.iface2 - if core_link.key() in self._links: - raise CoreError( - f"node1({node1.name}) iface1({iface1.id}) " - f"node2({node2.name}) iface2({iface2.id}) link already exists" - ) - logger.info( - "adding link from node(%s:%s) to node(%s:%s)", - node1.name, - iface1.name if iface1 else None, - node2.name, - iface2.name if iface2 else None, - ) - self._links[core_link.key()] = core_link - node1_links = self._node_links.setdefault(node1.id, {}) - node1_links[core_link.key()] = core_link - node2_links = self._node_links.setdefault(node2.id, {}) - node2_links[core_link.key()] = core_link - - def delete( - self, - node1: NodeBase, - iface1: Optional[CoreInterface], - node2: NodeBase, - iface2: Optional[CoreInterface], - ) -> CoreLink: - """ - Remove a link from being tracked. - - :param node1: first node in link - :param iface1: node1 interface - :param node2: second node in link - :param iface2: node2 interface - :return: removed core link - """ - key = create_key(node1, iface1, node2, iface2) - if key not in self._links: - raise CoreError( - f"node1({node1.name}) iface1({iface1.id}) " - f"node2({node2.name}) iface2({iface2.id}) is not linked" - ) - logger.info( - "deleting link from node(%s:%s) to node(%s:%s)", - node1.name, - iface1.name if iface1 else None, - node2.name, - iface2.name if iface2 else None, - ) - node1_links = self._node_links[node1.id] - node1_links.pop(key) - node2_links = self._node_links[node2.id] - node2_links.pop(key) - return self._links.pop(key) - - def reset(self) -> None: - """ - Resets and clears all tracking information. - - :return: nothing - """ - self._links.clear() - self._node_links.clear() - - def get_link( - self, - node1: NodeBase, - iface1: Optional[CoreInterface], - node2: NodeBase, - iface2: Optional[CoreInterface], - ) -> Optional[CoreLink]: - """ - Retrieve a link for provided values. - - :param node1: first node in link - :param iface1: interface for node1 - :param node2: second node in link - :param iface2: interface for node2 - :return: core link if present, None otherwise - """ - key = create_key(node1, iface1, node2, iface2) - return self._links.get(key) - - def links(self) -> ValuesView[CoreLink]: - """ - Retrieve all known links - - :return: iterator for all known links - """ - return self._links.values() - - def node_links(self, node: NodeBase) -> ValuesView[CoreLink]: - """ - Retrieve all links for a given node. - - :param node: node to get links for - :return: node links - """ - return self._node_links.get(node.id, {}).values() diff --git a/daemon/core/emulator/session.py b/daemon/core/emulator/session.py index 5a6557ee..9264ce84 100644 --- a/daemon/core/emulator/session.py +++ b/daemon/core/emulator/session.py @@ -4,7 +4,6 @@ that manages a CORE session. 
""" import logging -import math import os import pwd import shutil @@ -14,7 +13,7 @@ import tempfile import threading import time from pathlib import Path -from typing import Callable, Optional, TypeVar, Union +from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Type, TypeVar, Union from core import constants, utils from core.configservice.manager import ConfigServiceManager @@ -29,23 +28,24 @@ from core.emulator.data import ( LinkData, LinkOptions, NodeData, + NodeOptions, ) from core.emulator.distributed import DistributedController from core.emulator.enumerations import ( EventTypes, ExceptionLevels, + LinkTypes, MessageFlags, NodeTypes, ) -from core.emulator.links import CoreLink, LinkManager from core.emulator.sessionconfig import SessionConfig from core.errors import CoreError from core.location.event import EventLoop from core.location.geo import GeoLocation from core.location.mobility import BasicRangeModel, MobilityManager -from core.nodes.base import CoreNode, CoreNodeBase, NodeBase, NodeOptions, Position +from core.nodes.base import CoreNetworkBase, CoreNode, CoreNodeBase, NodeBase from core.nodes.docker import DockerNode -from core.nodes.interface import DEFAULT_MTU, CoreInterface +from core.nodes.interface import CoreInterface from core.nodes.lxd import LxcNode from core.nodes.network import ( CtrlNet, @@ -57,17 +57,13 @@ from core.nodes.network import ( WlanNode, ) from core.nodes.physical import PhysicalNode, Rj45Node -from core.nodes.podman import PodmanNode -from core.nodes.wireless import WirelessNode from core.plugins.sdt import Sdt from core.services.coreservices import CoreServices from core.xml import corexml, corexmldeployment from core.xml.corexml import CoreXmlReader, CoreXmlWriter -logger = logging.getLogger(__name__) - # maps for converting from API call node type values to classes and vice versa -NODES: dict[NodeTypes, type[NodeBase]] = { +NODES: Dict[NodeTypes, Type[NodeBase]] = { NodeTypes.DEFAULT: CoreNode, NodeTypes.PHYSICAL: PhysicalNode, NodeTypes.SWITCH: SwitchNode, @@ -81,18 +77,12 @@ NODES: dict[NodeTypes, type[NodeBase]] = { NodeTypes.CONTROL_NET: CtrlNet, NodeTypes.DOCKER: DockerNode, NodeTypes.LXC: LxcNode, - NodeTypes.WIRELESS: WirelessNode, - NodeTypes.PODMAN: PodmanNode, } -NODES_TYPE: dict[type[NodeBase], NodeTypes] = {NODES[x]: x for x in NODES} +NODES_TYPE: Dict[Type[NodeBase], NodeTypes] = {NODES[x]: x for x in NODES} +CONTAINER_NODES: Set[Type[NodeBase]] = {DockerNode, LxcNode} CTRL_NET_ID: int = 9001 -LINK_COLORS: list[str] = ["green", "blue", "orange", "purple", "turquoise"] +LINK_COLORS: List[str] = ["green", "blue", "orange", "purple", "turquoise"] NT: TypeVar = TypeVar("NT", bound=NodeBase) -WIRELESS_TYPE: tuple[type[WlanNode], type[EmaneNet], type[WirelessNode]] = ( - WlanNode, - EmaneNet, - WirelessNode, -) class Session: @@ -101,7 +91,7 @@ class Session: """ def __init__( - self, _id: int, config: dict[str, str] = None, mkdir: bool = True + self, _id: int, config: Dict[str, str] = None, mkdir: bool = True ) -> None: """ Create a Session instance. 
@@ -113,42 +103,47 @@ class Session: self.id: int = _id # define and create session directory when desired - self.directory: Path = Path(tempfile.gettempdir()) / f"pycore.{self.id}" + self.session_dir: str = os.path.join(tempfile.gettempdir(), f"pycore.{self.id}") if mkdir: - self.directory.mkdir() + os.mkdir(self.session_dir) self.name: Optional[str] = None - self.file_path: Optional[Path] = None - self.thumbnail: Optional[Path] = None + self.file_name: Optional[str] = None + self.thumbnail: Optional[str] = None self.user: Optional[str] = None self.event_loop: EventLoop = EventLoop() - self.link_colors: dict[int, str] = {} + self.link_colors: Dict[int, str] = {} # dict of nodes: all nodes and nets - self.nodes: dict[int, NodeBase] = {} - self.nodes_lock: threading.Lock = threading.Lock() - self.link_manager: LinkManager = LinkManager() + self.nodes: Dict[int, NodeBase] = {} + self.nodes_lock = threading.Lock() # states and hooks handlers self.state: EventTypes = EventTypes.DEFINITION_STATE self.state_time: float = time.monotonic() - self.hooks: dict[EventTypes, list[tuple[str, str]]] = {} - self.state_hooks: dict[EventTypes, list[Callable[[EventTypes], None]]] = {} + self.hooks: Dict[EventTypes, List[Tuple[str, str]]] = {} + self.state_hooks: Dict[EventTypes, List[Callable[[EventTypes], None]]] = {} self.add_state_hook( state=EventTypes.RUNTIME_STATE, hook=self.runtime_state_hook ) # handlers for broadcasting information - self.event_handlers: list[Callable[[EventData], None]] = [] - self.exception_handlers: list[Callable[[ExceptionData], None]] = [] - self.node_handlers: list[Callable[[NodeData], None]] = [] - self.link_handlers: list[Callable[[LinkData], None]] = [] - self.file_handlers: list[Callable[[FileData], None]] = [] - self.config_handlers: list[Callable[[ConfigData], None]] = [] + self.event_handlers: List[Callable[[EventData], None]] = [] + self.exception_handlers: List[Callable[[ExceptionData], None]] = [] + self.node_handlers: List[Callable[[NodeData], None]] = [] + self.link_handlers: List[Callable[[LinkData], None]] = [] + self.file_handlers: List[Callable[[FileData], None]] = [] + self.config_handlers: List[Callable[[ConfigData], None]] = [] + self.shutdown_handlers: List[Callable[[Session], None]] = [] # session options/metadata - self.options: SessionConfig = SessionConfig(config) - self.metadata: dict[str, str] = {} + self.options: SessionConfig = SessionConfig() + if not config: + config = {} + for key in config: + value = config[key] + self.options.set_config(key, value) + self.metadata: Dict[str, str] = {} # distributed support and logic self.distributed: DistributedController = DistributedController(self) @@ -164,7 +159,7 @@ class Session: self.service_manager: Optional[ConfigServiceManager] = None @classmethod - def get_node_class(cls, _type: NodeTypes) -> type[NodeBase]: + def get_node_class(cls, _type: NodeTypes) -> Type[NodeBase]: """ Retrieve the class for a given node type. @@ -177,7 +172,7 @@ class Session: return node_class @classmethod - def get_node_type(cls, _class: type[NodeBase]) -> NodeTypes: + def get_node_type(cls, _class: Type[NodeBase]) -> NodeTypes: """ Retrieve node type for a given node class. 
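For reference, a tiny sketch (not part of the patch) of the round trip provided by the NODES/NODES_TYPE maps above through these two classmethods:

from core.emulator.enumerations import NodeTypes
from core.emulator.session import Session
from core.nodes.network import SwitchNode

# NODES maps API node types to classes; NODES_TYPE is its inverse
assert Session.get_node_class(NodeTypes.SWITCH) is SwitchNode
assert Session.get_node_type(SwitchNode) is NodeTypes.SWITCH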
@@ -190,47 +185,42 @@ class Session: raise CoreError(f"invalid node class: {_class}") return node_type - def use_ovs(self) -> bool: - return self.options.get_int("ovs") == 1 - - def linked( - self, node1_id: int, node2_id: int, iface1_id: int, iface2_id: int, linked: bool + def _link_wireless( + self, node1: CoreNodeBase, node2: CoreNodeBase, connect: bool ) -> None: """ - Links or unlinks wired core link interfaces from being connected to the same - bridge. + Objects to deal with when connecting/disconnecting wireless links. - :param node1_id: first node in link - :param node2_id: second node in link - :param iface1_id: node1 interface - :param iface2_id: node2 interface - :param linked: True if interfaces should be connected, False for disconnected + :param node1: node one for wireless link + :param node2: node two for wireless link + :param connect: link interfaces if True, unlink otherwise :return: nothing + :raises core.CoreError: when objects to link is less than 2, or no common + networks are found """ - node1 = self.get_node(node1_id, NodeBase) - node2 = self.get_node(node2_id, NodeBase) - logger.info( - "link node(%s):interface(%s) node(%s):interface(%s) linked(%s)", + logging.info( + "handling wireless linking node1(%s) node2(%s): %s", node1.name, - iface1_id, node2.name, - iface2_id, - linked, + connect, ) - iface1 = node1.get_iface(iface1_id) - iface2 = node2.get_iface(iface2_id) - core_link = self.link_manager.get_link(node1, iface1, node2, iface2) - if not core_link: - raise CoreError( - f"there is no link for node({node1.name}):interface({iface1_id}) " - f"node({node2.name}):interface({iface2_id})" - ) - if linked: - core_link.ptp.attach(iface1) - core_link.ptp.attach(iface2) - else: - core_link.ptp.detach(iface1) - core_link.ptp.detach(iface2) + common_networks = node1.commonnets(node1) + if not common_networks: + raise CoreError("no common network found for wireless link/unlink") + for common_network, iface1, iface2 in common_networks: + if not isinstance(common_network, (WlanNode, EmaneNet)): + logging.info( + "skipping common network that is not wireless/emane: %s", + common_network, + ) + continue + if connect: + common_network.link(iface1, iface2) + else: + common_network.unlink(iface1, iface2) + + def use_ovs(self) -> bool: + return self.options.get_config("ovs") == "1" def add_link( self, @@ -239,7 +229,8 @@ class Session: iface1_data: InterfaceData = None, iface2_data: InterfaceData = None, options: LinkOptions = None, - ) -> tuple[Optional[CoreInterface], Optional[CoreInterface]]: + link_type: LinkTypes = LinkTypes.WIRED, + ) -> Tuple[CoreInterface, CoreInterface]: """ Add a link between nodes. 
@@ -251,129 +242,83 @@ class Session: data, defaults to none :param options: data for creating link, defaults to no options + :param link_type: type of link to add :return: tuple of created core interfaces, depending on link """ - options = options if options else LinkOptions() - # set mtu - mtu = self.options.get_int("mtu") or DEFAULT_MTU - if iface1_data: - iface1_data.mtu = mtu - if iface2_data: - iface2_data.mtu = mtu + if not options: + options = LinkOptions() node1 = self.get_node(node1_id, NodeBase) node2 = self.get_node(node2_id, NodeBase) - # check for invalid linking - if ( - isinstance(node1, WIRELESS_TYPE) - and isinstance(node2, WIRELESS_TYPE) - or isinstance(node1, WIRELESS_TYPE) - and not isinstance(node2, CoreNodeBase) - or not isinstance(node1, CoreNodeBase) - and isinstance(node2, WIRELESS_TYPE) - ): - raise CoreError(f"cannot link node({type(node1)}) node({type(node2)})") - # custom links iface1 = None iface2 = None - if isinstance(node1, (WlanNode, WirelessNode)): - iface2 = self._add_wlan_link(node2, iface2_data, node1) - elif isinstance(node2, (WlanNode, WirelessNode)): - iface1 = self._add_wlan_link(node1, iface1_data, node2) - elif isinstance(node1, EmaneNet) and isinstance(node2, CoreNode): - iface2 = self._add_emane_link(node2, iface2_data, node1) - elif isinstance(node2, EmaneNet) and isinstance(node1, CoreNode): - iface1 = self._add_emane_link(node1, iface1_data, node2) + + # wireless link + if link_type == LinkTypes.WIRELESS: + if isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNodeBase): + self._link_wireless(node1, node2, connect=True) + else: + raise CoreError( + f"cannot wireless link node1({type(node1)}) node2({type(node2)})" + ) + # wired link else: - iface1, iface2 = self._add_wired_link( - node1, node2, iface1_data, iface2_data, options - ) - # configure tunnel nodes - key = options.key - if isinstance(node1, TunnelNode): - logger.info("setting tunnel key for: %s", node1.name) - node1.setkey(key, iface1_data) - if isinstance(node2, TunnelNode): - logger.info("setting tunnel key for: %s", node2.name) - node2.setkey(key, iface2_data) + # peer to peer link + if isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNodeBase): + logging.info("linking ptp: %s - %s", node1.name, node2.name) + start = self.state.should_start() + ptp = self.create_node(PtpNet, start) + iface1 = node1.new_iface(ptp, iface1_data) + iface2 = node2.new_iface(ptp, iface2_data) + ptp.linkconfig(iface1, options) + if not options.unidirectional: + ptp.linkconfig(iface2, options) + # link node to net + elif isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNetworkBase): + iface1 = node1.new_iface(node2, iface1_data) + if not isinstance(node2, (EmaneNet, WlanNode)): + node2.linkconfig(iface1, options) + # link net to node + elif isinstance(node2, CoreNodeBase) and isinstance(node1, CoreNetworkBase): + iface2 = node2.new_iface(node1, iface2_data) + wireless_net = isinstance(node1, (EmaneNet, WlanNode)) + if not options.unidirectional and not wireless_net: + node1.linkconfig(iface2, options) + # network to network + elif isinstance(node1, CoreNetworkBase) and isinstance( + node2, CoreNetworkBase + ): + logging.info( + "linking network to network: %s - %s", node1.name, node2.name + ) + iface1 = node1.linknet(node2) + node1.linkconfig(iface1, options) + if not options.unidirectional: + iface1.swapparams("_params_up") + node2.linkconfig(iface1, options) + iface1.swapparams("_params_up") + else: + raise CoreError( + f"cannot link node1({type(node1)}) node2({type(node2)})" 
+ ) + + # configure tunnel nodes + key = options.key + if isinstance(node1, TunnelNode): + logging.info("setting tunnel key for: %s", node1.name) + node1.setkey(key, iface1_data) + if isinstance(node2, TunnelNode): + logging.info("setting tunnel key for: %s", node2.name) + node2.setkey(key, iface2_data) self.sdt.add_link(node1_id, node2_id) return iface1, iface2 - def _add_wlan_link( - self, - node: NodeBase, - iface_data: InterfaceData, - net: Union[WlanNode, WirelessNode], - ) -> CoreInterface: - """ - Create a wlan link. - - :param node: node to link to wlan network - :param iface_data: data to create interface with - :param net: wlan network to link to - :return: interface created for node - """ - # create interface - iface = node.create_iface(iface_data) - # attach to wlan - net.attach(iface) - # track link - core_link = CoreLink(node, iface, net, None) - self.link_manager.add(core_link) - return iface - - def _add_emane_link( - self, node: CoreNode, iface_data: InterfaceData, net: EmaneNet - ) -> CoreInterface: - """ - Create am emane link. - - :param node: node to link to emane network - :param iface_data: data to create interface with - :param net: emane network to link to - :return: interface created for node - """ - # create iface tuntap - iface = net.create_tuntap(node, iface_data) - # track link - core_link = CoreLink(node, iface, net, None) - self.link_manager.add(core_link) - return iface - - def _add_wired_link( - self, - node1: NodeBase, - node2: NodeBase, - iface1_data: InterfaceData = None, - iface2_data: InterfaceData = None, - options: LinkOptions = None, - ) -> tuple[CoreInterface, CoreInterface]: - """ - Create a wired link between two nodes. - - :param node1: first node to be linked - :param node2: second node to be linked - :param iface1_data: data to create interface for node1 - :param iface2_data: data to create interface for node2 - :param options: options to configure interfaces with - :return: interfaces created for both nodes - """ - # create interfaces - iface1 = node1.create_iface(iface1_data, options) - iface2 = node2.create_iface(iface2_data, options) - # join and attach to ptp bridge - ptp = self.create_node(PtpNet, self.state.should_start()) - ptp.attach(iface1) - ptp.attach(iface2) - # track link - core_link = CoreLink(node1, iface1, node2, iface2, ptp) - self.link_manager.add(core_link) - # setup link for gre tunnels if needed - if ptp.up: - self.distributed.create_gre_tunnels(core_link) - return iface1, iface2 - def delete_link( - self, node1_id: int, node2_id: int, iface1_id: int = None, iface2_id: int = None + self, + node1_id: int, + node2_id: int, + iface1_id: int = None, + iface2_id: int = None, + link_type: LinkTypes = LinkTypes.WIRED, ) -> None: """ Delete a link between nodes. 
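A short sketch (not part of the patch) of exercising the restored add_link branches above; it assumes a running session with node1 and node2 (CoreNode) and switch (SwitchNode) already added, and the addresses and shaping values are example data:

from core.emulator.data import InterfaceData, LinkOptions

# node-to-node: takes the "peer to peer link" branch and creates a PtpNet bridge
iface1, iface2 = session.add_link(
    node1.id,
    node2.id,
    iface1_data=InterfaceData(ip4="10.0.0.1", ip4_mask=24),
    iface2_data=InterfaceData(ip4="10.0.0.2", ip4_mask=24),
    options=LinkOptions(bandwidth=55_000_000, delay=5000),  # example shaping values
)

# node-to-network: only the node-side interface is created, the switch side stays None
iface3, _ = session.add_link(
    node1.id,
    switch.id,
    iface1_data=InterfaceData(ip4="10.0.1.1", ip4_mask=24),
)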
@@ -382,38 +327,61 @@ class Session: :param node2_id: node two id :param iface1_id: interface id for node one :param iface2_id: interface id for node two + :param link_type: link type to delete :return: nothing :raises core.CoreError: when no common network is found for link being deleted """ node1 = self.get_node(node1_id, NodeBase) node2 = self.get_node(node2_id, NodeBase) - logger.info( - "deleting link node(%s):interface(%s) node(%s):interface(%s)", + logging.info( + "deleting link(%s) node(%s):interface(%s) node(%s):interface(%s)", + link_type.name, node1.name, iface1_id, node2.name, iface2_id, ) - iface1 = None - iface2 = None - if isinstance(node1, (WlanNode, WirelessNode)): - iface2 = node2.delete_iface(iface2_id) - node1.detach(iface2) - elif isinstance(node2, (WlanNode, WirelessNode)): - iface1 = node1.delete_iface(iface1_id) - node2.detach(iface1) - elif isinstance(node1, EmaneNet): - iface2 = node2.delete_iface(iface2_id) - node1.detach(iface2) - elif isinstance(node2, EmaneNet): - iface1 = node1.delete_iface(iface1_id) - node2.detach(iface1) + + # wireless link + if link_type == LinkTypes.WIRELESS: + if isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNodeBase): + self._link_wireless(node1, node2, connect=False) + else: + raise CoreError( + "cannot delete wireless link " + f"node1({type(node1)}) node2({type(node2)})" + ) + # wired link else: - iface1 = node1.delete_iface(iface1_id) - iface2 = node2.delete_iface(iface2_id) - core_link = self.link_manager.delete(node1, iface1, node2, iface2) - if core_link.ptp: - self.delete_node(core_link.ptp.id) + if isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNodeBase): + iface1 = node1.get_iface(iface1_id) + iface2 = node2.get_iface(iface2_id) + if iface1.net != iface2.net: + raise CoreError( + f"node1({node1.name}) node2({node2.name}) " + "not connected to same net" + ) + ptp = iface1.net + node1.delete_iface(iface1_id) + node2.delete_iface(iface2_id) + self.delete_node(ptp.id) + elif isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNetworkBase): + node1.delete_iface(iface1_id) + elif isinstance(node2, CoreNodeBase) and isinstance(node1, CoreNetworkBase): + node2.delete_iface(iface2_id) + elif isinstance(node1, CoreNetworkBase) and isinstance( + node2, CoreNetworkBase + ): + for iface in node1.get_ifaces(control=False): + if iface.othernet == node2: + node1.detach(iface) + iface.shutdown() + break + for iface in node2.get_ifaces(control=False): + if iface.othernet == node1: + node2.detach(iface) + iface.shutdown() + break self.sdt.delete_link(node1_id, node2_id) def update_link( @@ -423,6 +391,7 @@ class Session: iface1_id: int = None, iface2_id: int = None, options: LinkOptions = None, + link_type: LinkTypes = LinkTypes.WIRED, ) -> None: """ Update link information between nodes. 
@@ -432,6 +401,7 @@ class Session: :param iface1_id: interface id for node one :param iface2_id: interface id for node two :param options: data to update link with + :param link_type: type of link to update :return: nothing :raises core.CoreError: when updating a wireless type link, when there is a unknown link between networks @@ -440,27 +410,72 @@ class Session: options = LinkOptions() node1 = self.get_node(node1_id, NodeBase) node2 = self.get_node(node2_id, NodeBase) - logger.info( - "update link node(%s):interface(%s) node(%s):interface(%s)", + logging.info( + "update link(%s) node(%s):interface(%s) node(%s):interface(%s)", + link_type.name, node1.name, iface1_id, node2.name, iface2_id, ) - iface1 = node1.get_iface(iface1_id) if iface1_id is not None else None - iface2 = node2.get_iface(iface2_id) if iface2_id is not None else None - core_link = self.link_manager.get_link(node1, iface1, node2, iface2) - if not core_link: - raise CoreError( - f"there is no link for node({node1.name}):interface({iface1_id}) " - f"node({node2.name}):interface({iface2_id})" - ) - if iface1: - iface1.options.update(options) - iface1.set_config() - if iface2 and not options.unidirectional: - iface2.options.update(options) - iface2.set_config() + + # wireless link + if link_type == LinkTypes.WIRELESS: + raise CoreError("cannot update wireless link") + else: + if isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNodeBase): + iface1 = node1.ifaces.get(iface1_id) + iface2 = node2.ifaces.get(iface2_id) + if not iface1: + raise CoreError( + f"node({node1.name}) missing interface({iface1_id})" + ) + if not iface2: + raise CoreError( + f"node({node2.name}) missing interface({iface2_id})" + ) + if iface1.net != iface2.net: + raise CoreError( + f"node1({node1.name}) node2({node2.name}) " + "not connected to same net" + ) + ptp = iface1.net + ptp.linkconfig(iface1, options, iface2) + if not options.unidirectional: + ptp.linkconfig(iface2, options, iface1) + elif isinstance(node1, CoreNodeBase) and isinstance(node2, CoreNetworkBase): + iface = node1.get_iface(iface1_id) + node2.linkconfig(iface, options) + elif isinstance(node2, CoreNodeBase) and isinstance(node1, CoreNetworkBase): + iface = node2.get_iface(iface2_id) + node1.linkconfig(iface, options) + elif isinstance(node1, CoreNetworkBase) and isinstance( + node2, CoreNetworkBase + ): + iface = node1.get_linked_iface(node2) + upstream = False + if not iface: + upstream = True + iface = node2.get_linked_iface(node1) + if not iface: + raise CoreError("modify unknown link between nets") + if upstream: + iface.swapparams("_params_up") + node1.linkconfig(iface, options) + iface.swapparams("_params_up") + else: + node1.linkconfig(iface, options) + if not options.unidirectional: + if upstream: + node2.linkconfig(iface, options) + else: + iface.swapparams("_params_up") + node2.linkconfig(iface, options) + iface.swapparams("_params_up") + else: + raise CoreError( + f"cannot update link node1({type(node1)}) node2({type(node2)})" + ) def next_node_id(self) -> int: """ @@ -476,106 +491,194 @@ class Session: return _id def add_node( - self, - _class: type[NT], - _id: int = None, - name: str = None, - server: str = None, - position: Position = None, - options: NodeOptions = None, + self, _class: Type[NT], _id: int = None, options: NodeOptions = None ) -> NT: """ Add a node to the session, based on the provided node data. 
:param _class: node class to create :param _id: id for node, defaults to None for generated id - :param name: name to assign to node - :param server: distributed server for node, if desired - :param position: geo or x/y/z position to set - :param options: options to create node with + :param options: data to create node with :return: created node :raises core.CoreError: when an invalid node type is given """ # set node start based on current session state, override and check when rj45 start = self.state.should_start() - enable_rj45 = self.options.get_int("enablerj45") == 1 + enable_rj45 = self.options.get_config("enablerj45") == "1" if _class == Rj45Node and not enable_rj45: start = False - # generate options if not provided - options = options if options else _class.create_options() + + # determine node id + if not _id: + _id = self.next_node_id() + + # generate name if not provided + if not options: + options = NodeOptions() + options.set_position(0, 0) + name = options.name + if not name: + name = f"{_class.__name__}{_id}" + # verify distributed server - dist_server = None - if server is not None: - dist_server = self.distributed.servers.get(server) - if not dist_server: - raise CoreError(f"invalid distributed server: {server}") + server = self.distributed.servers.get(options.server) + if options.server is not None and server is None: + raise CoreError(f"invalid distributed server: {options.server}") + # create node - node = self.create_node(_class, start, _id, name, dist_server, options) - # set node position - position = position or Position() - if position.has_geo(): - self.set_node_geo(node, position.lon, position.lat, position.alt) - else: - self.set_node_pos(node, position.x, position.y) - # setup default wlan + logging.info( + "creating node(%s) id(%s) name(%s) start(%s)", + _class.__name__, + _id, + name, + start, + ) + kwargs = dict(_id=_id, name=name, server=server) + if _class in CONTAINER_NODES: + kwargs["image"] = options.image + node = self.create_node(_class, start, **kwargs) + + # set node attributes + node.icon = options.icon + node.canvas = options.canvas + + # set node position and broadcast it + self.set_node_position(node, options) + + # add services to needed nodes + if isinstance(node, (CoreNode, PhysicalNode)): + node.type = options.model + logging.debug("set node type: %s", node.type) + self.services.add_services(node, node.type, options.services) + + # add config services + logging.info("setting node config services: %s", options.config_services) + for name in options.config_services: + service_class = self.service_manager.get_service(name) + node.add_config_service(service_class) + + # ensure default emane configuration + if isinstance(node, EmaneNet) and options.emane: + model = self.emane.models.get(options.emane) + if not model: + raise CoreError( + f"node({node.name}) emane model({options.emane}) does not exist" + ) + node.model = model(self, node.id) + if self.state == EventTypes.RUNTIME_STATE: + self.emane.add_node(node) + # set default wlan config if needed if isinstance(node, WlanNode): - self.mobility.set_model_config(node.id, BasicRangeModel.name) - # boot core nodes after runtime - if self.is_running() and isinstance(node, CoreNode): + self.mobility.set_model_config(_id, BasicRangeModel.name) + + # boot nodes after runtime CoreNodes and PhysicalNodes + is_boot_node = isinstance(node, (CoreNode, PhysicalNode)) + if self.state == EventTypes.RUNTIME_STATE and is_boot_node: + self.write_nodes() self.add_remove_control_iface(node, remove=False) - 
self.boot_node(node) + self.services.boot_services(node) + self.sdt.add_node(node) return node - def set_node_pos(self, node: NodeBase, x: float, y: float) -> None: - node.setposition(x, y, None) - self.sdt.edit_node( - node, node.position.lon, node.position.lat, node.position.alt - ) + def edit_node(self, node_id: int, options: NodeOptions) -> None: + """ + Edit node information. - def set_node_geo(self, node: NodeBase, lon: float, lat: float, alt: float) -> None: - x, y, _ = self.location.getxyz(lat, lon, alt) - if math.isinf(x) or math.isinf(y): - raise CoreError( - f"invalid geo for current reference/scale: {lon},{lat},{alt}" - ) - node.setposition(x, y, None) - node.position.set_geo(lon, lat, alt) - self.sdt.edit_node(node, lon, lat, alt) + :param node_id: id of node to update + :param options: data to update node with + :return: nothing + :raises core.CoreError: when node to update does not exist + """ + node = self.get_node(node_id, NodeBase) + node.canvas = options.canvas + node.icon = options.icon + self.set_node_position(node, options) + self.sdt.edit_node(node, options.lon, options.lat, options.alt) - def open_xml(self, file_path: Path, start: bool = False) -> None: + def set_node_position(self, node: NodeBase, options: NodeOptions) -> None: + """ + Set position for a node, use lat/lon/alt if needed. + + :param node: node to set position for + :param options: data for node + :return: nothing + """ + # extract location values + x = options.x + y = options.y + lat = options.lat + lon = options.lon + alt = options.alt + + # check if we need to generate position from lat/lon/alt + has_empty_position = all(i is None for i in [x, y]) + has_lat_lon_alt = all(i is not None for i in [lat, lon, alt]) + using_lat_lon_alt = has_empty_position and has_lat_lon_alt + if using_lat_lon_alt: + x, y, _ = self.location.getxyz(lat, lon, alt) + node.setposition(x, y, None) + node.position.set_geo(lon, lat, alt) + self.broadcast_node(node) + elif not has_empty_position: + node.setposition(x, y, None) + + def start_mobility(self, node_ids: List[int] = None) -> None: + """ + Start mobility for the provided node ids. + + :param node_ids: nodes to start mobility for + :return: nothing + """ + self.mobility.startup(node_ids) + + def is_active(self) -> bool: + """ + Determine if this session is considered to be active. + (Runtime or Data collect states) + + :return: True if active, False otherwise + """ + result = self.state in {EventTypes.RUNTIME_STATE, EventTypes.DATACOLLECT_STATE} + logging.info("session(%s) checking if active: %s", self.id, result) + return result + + def open_xml(self, file_name: str, start: bool = False) -> None: """ Import a session from the EmulationScript XML format. 
- :param file_path: xml file to load session from + :param file_name: xml file to load session from :param start: instantiate session if true, false otherwise :return: nothing """ - logger.info("opening xml: %s", file_path) + logging.info("opening xml: %s", file_name) + # clear out existing session self.clear() + # set state and read xml state = EventTypes.CONFIGURATION_STATE if start else EventTypes.DEFINITION_STATE self.set_state(state) - self.name = file_path.name - self.file_path = file_path - CoreXmlReader(self).read(file_path) + self.name = os.path.basename(file_name) + self.file_name = file_name + CoreXmlReader(self).read(file_name) + # start session if needed if start: self.set_state(EventTypes.INSTANTIATION_STATE) self.instantiate() - def save_xml(self, file_path: Path) -> None: + def save_xml(self, file_name: str) -> None: """ Export a session to the EmulationScript XML format. - :param file_path: file name to write session xml to + :param file_name: file name to write session xml to :return: nothing """ - CoreXmlWriter(self).write(file_path) + CoreXmlWriter(self).write(file_name) def add_hook( - self, state: EventTypes, file_name: str, data: str, src_name: str = None + self, state: EventTypes, file_name: str, data: str, source_name: str = None ) -> None: """ Store a hook from a received file message. @@ -583,11 +686,11 @@ class Session: :param state: when to run hook :param file_name: file name for hook :param data: hook data - :param src_name: source name + :param source_name: source name :return: nothing """ - logger.info( - "setting state hook: %s - %s source(%s)", state, file_name, src_name + logging.info( + "setting state hook: %s - %s source(%s)", state, file_name, source_name ) hook = file_name, data state_hooks = self.hooks.setdefault(state, []) @@ -595,9 +698,27 @@ class Session: # immediately run a hook if it is in the current state if self.state == state: - logger.info("immediately running new state hook") + logging.info("immediately running new state hook") self.run_hook(hook) + def add_node_file( + self, node_id: int, source_name: str, file_name: str, data: str + ) -> None: + """ + Add a file to a node. + + :param node_id: node to add file to + :param source_name: source file name + :param file_name: file name to add + :param data: file data + :return: nothing + """ + node = self.get_node(node_id, CoreNodeBase) + if source_name is not None: + node.addfile(source_name, file_name) + elif data is not None: + node.nodefile(file_name, data) + def clear(self) -> None: """ Clear all CORE session data. (nodes, hooks, etc) @@ -606,7 +727,6 @@ class Session: """ self.emane.shutdown() self.delete_nodes() - self.link_manager.reset() self.distributed.shutdown() self.hooks.clear() self.emane.reset() @@ -616,6 +736,23 @@ class Session: self.mobility.config_reset() self.link_colors.clear() + def start_events(self) -> None: + """ + Start event loop. + + :return: nothing + """ + self.event_loop.run() + + def mobility_event(self, event_data: EventData) -> None: + """ + Handle a mobility event. + + :param event_data: event data to handle + :return: nothing + """ + self.mobility.handleevent(event_data) + def set_location(self, lat: float, lon: float, alt: float, scale: float) -> None: """ Set session geospatial location. @@ -634,18 +771,21 @@ class Session: Shutdown all session nodes and remove the session directory. 
""" if self.state == EventTypes.SHUTDOWN_STATE: - logger.info("session(%s) state(%s) already shutdown", self.id, self.state) - else: - logger.info("session(%s) state(%s) shutting down", self.id, self.state) - self.set_state(EventTypes.SHUTDOWN_STATE, send_event=True) - # clear out current core session - self.clear() - # shutdown sdt - self.sdt.shutdown() + logging.info("session(%s) state(%s) already shutdown", self.id, self.state) + return + logging.info("session(%s) state(%s) shutting down", self.id, self.state) + self.set_state(EventTypes.SHUTDOWN_STATE, send_event=True) + # clear out current core session + self.clear() + # shutdown sdt + self.sdt.shutdown() # remove this sessions working directory - preserve = self.options.get_int("preservedir") == 1 + preserve = self.options.get_config("preservedir") == "1" if not preserve: - shutil.rmtree(self.directory, ignore_errors=True) + shutil.rmtree(self.session_dir, ignore_errors=True) + # call session shutdown handlers + for handler in self.shutdown_handlers: + handler(self) def broadcast_event(self, event_data: EventData) -> None: """ @@ -681,6 +821,8 @@ class Session: :param source: source of broadcast, None by default :return: nothing """ + if not node.apitype: + return node_data = NodeData(node=node, message_type=message_type, source=source) for handler in self.node_handlers: handler(node_data) @@ -727,13 +869,28 @@ class Session: return self.state = state self.state_time = time.monotonic() - logger.info("changing session(%s) to state %s", self.id, state.name) + logging.info("changing session(%s) to state %s", self.id, state.name) + self.write_state(state) self.run_hooks(state) self.run_state_hooks(state) if send_event: event_data = EventData(event_type=state, time=str(time.monotonic())) self.broadcast_event(event_data) + def write_state(self, state: EventTypes) -> None: + """ + Write the state to a state file in the session dir. + + :param state: state to write to file + :return: nothing + """ + state_file = os.path.join(self.session_dir, "state") + try: + with open(state_file, "w") as f: + f.write(f"{state.value} {state.name}\n") + except IOError: + logging.exception("error writing state file: %s", state.name) + def run_hooks(self, state: EventTypes) -> None: """ Run hook scripts upon changing states. If hooks is not specified, run all hooks @@ -746,7 +903,7 @@ class Session: for hook in hooks: self.run_hook(hook) - def run_hook(self, hook: tuple[str, str]) -> None: + def run_hook(self, hook: Tuple[str, str]) -> None: """ Run a hook. 
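A brief sketch (not part of the patch) of registering a state hook via add_hook shown earlier; when the session enters that state, run_hooks/run_hook execute it with /bin/sh from the session directory. The session variable is assumed to be an existing Session, and the script body is an arbitrary example:

from core.emulator.enumerations import EventTypes

# SESSION_DIR is provided by the session environment when the hook runs
hook_script = "#!/bin/sh\necho $SESSION_DIR > hook_output.txt\n"
session.add_hook(EventTypes.RUNTIME_STATE, "runtime_hook.sh", hook_script)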
@@ -754,24 +911,24 @@ class Session: :return: nothing """ file_name, data = hook - logger.info("running hook %s", file_name) - file_path = self.directory / file_name - log_path = self.directory / f"{file_name}.log" + logging.info("running hook %s", file_name) + file_path = os.path.join(self.session_dir, file_name) + log_path = os.path.join(self.session_dir, f"{file_name}.log") try: - with file_path.open("w") as f: + with open(file_path, "w") as f: f.write(data) - with log_path.open("w") as f: + with open(log_path, "w") as f: args = ["/bin/sh", file_name] subprocess.check_call( args, stdout=f, stderr=subprocess.STDOUT, close_fds=True, - cwd=self.directory, + cwd=self.session_dir, env=self.get_environment(), ) - except (OSError, subprocess.CalledProcessError): - logger.exception("error running hook: %s", file_path) + except (IOError, subprocess.CalledProcessError): + logging.exception("error running hook: %s", file_path) def run_state_hooks(self, state: EventTypes) -> None: """ @@ -788,7 +945,7 @@ class Session: hook(state) except Exception: message = f"exception occurred when running {state.name} state hook: {hook}" - logger.exception(message) + logging.exception(message) self.exception(ExceptionLevels.ERROR, "Session.run_state_hooks", message) def add_state_hook( @@ -831,12 +988,12 @@ class Session: """ self.emane.poststartup() # create session deployed xml + xml_file_name = os.path.join(self.session_dir, "session-deployed.xml") xml_writer = corexml.CoreXmlWriter(self) corexmldeployment.CoreXmlDeployment(self, xml_writer.scenario) - xml_file_path = self.directory / "session-deployed.xml" - xml_writer.write(xml_file_path) + xml_writer.write(xml_file_name) - def get_environment(self, state: bool = True) -> dict[str, str]: + def get_environment(self, state: bool = True) -> Dict[str, str]: """ Get an environment suitable for a subprocess.Popen call. This is the current process environment with some session-specific @@ -849,32 +1006,48 @@ class Session: env["CORE_PYTHON"] = sys.executable env["SESSION"] = str(self.id) env["SESSION_SHORT"] = self.short_session_id() - env["SESSION_DIR"] = str(self.directory) + env["SESSION_DIR"] = self.session_dir env["SESSION_NAME"] = str(self.name) - env["SESSION_FILENAME"] = str(self.file_path) + env["SESSION_FILENAME"] = str(self.file_name) env["SESSION_USER"] = str(self.user) if state: env["SESSION_STATE"] = str(self.state) # try reading and merging optional environments from: # /etc/core/environment - # /home/user/.coregui/environment + # /home/user/.core/environment # /tmp/pycore./environment - core_env_path = constants.CORE_CONF_DIR / "environment" - session_env_path = self.directory / "environment" + core_env_path = Path(constants.CORE_CONF_DIR) / "environment" + session_env_path = Path(self.session_dir) / "environment" if self.user: user_home_path = Path(f"~{self.user}").expanduser() - user_env = user_home_path / ".coregui" / "environment" - paths = [core_env_path, user_env, session_env_path] + user_env1 = user_home_path / ".core" / "environment" + user_env2 = user_home_path / ".coregui" / "environment" + paths = [core_env_path, user_env1, user_env2, session_env_path] else: paths = [core_env_path, session_env_path] for path in paths: if path.is_file(): try: utils.load_config(path, env) - except OSError: - logger.exception("error reading environment file: %s", path) + except IOError: + logging.exception("error reading environment file: %s", path) return env + def set_thumbnail(self, thumb_file: str) -> None: + """ + Set the thumbnail filename. 
Move files from /tmp to session dir. + + :param thumb_file: thumbnail file to set for session + :return: nothing + """ + if not os.path.exists(thumb_file): + logging.error("thumbnail file to set does not exist: %s", thumb_file) + self.thumbnail = None + return + destination_file = os.path.join(self.session_dir, os.path.basename(thumb_file)) + shutil.copy(thumb_file, destination_file) + self.thumbnail = destination_file + def set_user(self, user: str) -> None: """ Set the username for this session. Update the permissions of the @@ -883,53 +1056,39 @@ class Session: :param user: user to give write permissions to for the session directory :return: nothing """ + if user: + try: + uid = pwd.getpwnam(user).pw_uid + gid = os.stat(self.session_dir).st_gid + os.chown(self.session_dir, uid, gid) + except IOError: + logging.exception("failed to set permission on %s", self.session_dir) self.user = user - try: - uid = pwd.getpwnam(user).pw_uid - gid = self.directory.stat().st_gid - os.chown(self.directory, uid, gid) - except OSError: - logger.exception("failed to set permission on %s", self.directory) def create_node( - self, - _class: type[NT], - start: bool, - _id: int = None, - name: str = None, - server: str = None, - options: NodeOptions = None, + self, _class: Type[NT], start: bool, *args: Any, **kwargs: Any ) -> NT: """ Create an emulation node. :param _class: node class to create :param start: True to start node, False otherwise - :param _id: id for node, defaults to None for generated id - :param name: name to assign to node - :param server: distributed server for node, if desired - :param options: options to create node with + :param args: list of arguments for the class to create + :param kwargs: dictionary of arguments for the class to create :return: the created node instance :raises core.CoreError: when id of the node to create already exists """ with self.nodes_lock: - node = _class(self, _id=_id, name=name, server=server, options=options) + node = _class(self, *args, **kwargs) if node.id in self.nodes: node.shutdown() raise CoreError(f"duplicate node id {node.id} for {node.name}") self.nodes[node.id] = node - logger.info( - "created node(%s) id(%s) name(%s) start(%s)", - _class.__name__, - node.id, - node.name, - start, - ) if start: node.startup() return node - def get_node(self, _id: int, _class: type[NT]) -> NT: + def get_node(self, _id: int, _class: Type[NT]) -> NT: """ Get a session node. @@ -960,7 +1119,7 @@ class Session: with self.nodes_lock: if _id in self.nodes: node = self.nodes.pop(_id) - logger.info("deleted node(%s)", node.name) + logging.info("deleted node(%s)", node.name) if node: node.shutdown() self.sdt.delete_node(_id) @@ -981,6 +1140,20 @@ class Session: for node_id in nodes_ids: self.sdt.delete_node(node_id) + def write_nodes(self) -> None: + """ + Write nodes to a 'nodes' file in the session dir.
+ The 'nodes' file lists: number, name, api-type, class-type + """ + file_path = os.path.join(self.session_dir, "nodes") + try: + with self.nodes_lock: + with open(file_path, "w") as f: + for _id, node in self.nodes.items(): + f.write(f"{_id} {node.name} {node.apitype} {type(node)}\n") + except IOError: + logging.exception("error writing nodes file") + def exception( self, level: ExceptionLevels, source: str, text: str, node_id: int = None ) -> None: @@ -1003,7 +1176,7 @@ class Session: ) self.broadcast_exception(exception_data) - def instantiate(self) -> list[Exception]: + def instantiate(self) -> List[Exception]: """ We have entered the instantiation state, invoke startup methods of various managers and boot the nodes. Validate nodes and check @@ -1011,32 +1184,34 @@ class Session: :return: list of service boot errors during startup """ - if self.is_running(): - logger.warning("ignoring instantiate, already in runtime state") - return [] + # write current nodes out to session directory file + self.write_nodes() + # create control net interfaces and network tunnels # which need to exist for emane to sync on location events # in distributed scenarios self.add_remove_control_net(0, remove=False) + # initialize distributed tunnels self.distributed.start() + # instantiate will be invoked again upon emane configure if self.emane.startup() == EmaneState.NOT_READY: return [] + # boot node services and then start mobility exceptions = self.boot_nodes() if not exceptions: - # complete wireless node - for node in self.nodes.values(): - if isinstance(node, WirelessNode): - node.post_startup() self.mobility.startup() + # notify listeners that instantiation is complete event = EventData(event_type=EventTypes.INSTANTIATION_COMPLETE) self.broadcast_event(event) - # startup event loop - self.event_loop.run() - self.set_state(EventTypes.RUNTIME_STATE, send_event=True) + + # assume either all nodes have booted already, or there are some + # nodes on slave servers that will be booted and those servers will + # send a node status response message + self.check_runtime() return exceptions def get_node_count(self) -> int: @@ -1058,6 +1233,28 @@ class Session: count += 1 return count + def check_runtime(self) -> None: + """ + Check if we have entered the runtime state, that all nodes have been + started and the emulation is running. Start the event loop once we + have entered runtime (time=0). + + :return: nothing + """ + # this is called from instantiate() after receiving an event message + # for the instantiation state + logging.debug( + "session(%s) checking if not in runtime state, current state: %s", + self.id, + self.state.name, + ) + if self.state == EventTypes.RUNTIME_STATE: + logging.info("valid runtime state found, returning") + return + # start event loop and set to runtime + self.event_loop.run() + self.set_state(EventTypes.RUNTIME_STATE, send_event=True) + def data_collect(self) -> None: """ Tear down a running session. 
Stop the event loop and any running @@ -1066,24 +1263,25 @@ class Session: :return: nothing """ if self.state.already_collected(): - logger.info( + logging.info( "session(%s) state(%s) already data collected", self.id, self.state ) return - logger.info("session(%s) state(%s) data collection", self.id, self.state) + logging.info("session(%s) state(%s) data collection", self.id, self.state) self.set_state(EventTypes.DATACOLLECT_STATE, send_event=True) # stop event loop self.event_loop.stop() - # stop mobility and node services + # stop node services with self.nodes_lock: funcs = [] - for node in self.nodes.values(): - if isinstance(node, CoreNodeBase) and node.up: - args = (node,) - funcs.append((self.services.stop_services, args, {})) - funcs.append((node.stop_config_services, (), {})) + for node_id in self.nodes: + node = self.nodes[node_id] + if not isinstance(node, CoreNodeBase) or not node.up: + continue + args = (node,) + funcs.append((self.services.stop_services, args, {})) utils.threadpool(funcs) # shutdown emane @@ -1114,16 +1312,11 @@ class Session: :param node: node to boot :return: nothing """ - logger.info( - "booting node(%s): config services(%s) services(%s)", - node.name, - ", ".join(node.config_services.keys()), - ", ".join(x.name for x in node.services), - ) + logging.info("booting node(%s): %s", node.name, [x.name for x in node.services]) self.services.boot_services(node) node.start_config_services() - def boot_nodes(self) -> list[Exception]: + def boot_nodes(self) -> List[Exception]: """ Invoke the boot() procedure for all nodes and send back node messages to the GUI for node messages that had the status @@ -1135,43 +1328,43 @@ class Session: funcs = [] start = time.monotonic() for node in self.nodes.values(): - if isinstance(node, CoreNode): + if isinstance(node, (CoreNode, PhysicalNode)): self.add_remove_control_iface(node, remove=False) funcs.append((self.boot_node, (node,), {})) results, exceptions = utils.threadpool(funcs) total = time.monotonic() - start - logger.debug("boot run time: %s", total) + logging.debug("boot run time: %s", total) if not exceptions: self.update_control_iface_hosts() return exceptions - def get_control_net_prefixes(self) -> list[str]: + def get_control_net_prefixes(self) -> List[str]: """ Retrieve control net prefixes. :return: control net prefix list """ - p = self.options.get("controlnet") - p0 = self.options.get("controlnet0") - p1 = self.options.get("controlnet1") - p2 = self.options.get("controlnet2") - p3 = self.options.get("controlnet3") + p = self.options.get_config("controlnet") + p0 = self.options.get_config("controlnet0") + p1 = self.options.get_config("controlnet1") + p2 = self.options.get_config("controlnet2") + p3 = self.options.get_config("controlnet3") if not p0 and p: p0 = p return [p0, p1, p2, p3] - def get_control_net_server_ifaces(self) -> list[str]: + def get_control_net_server_ifaces(self) -> List[str]: """ Retrieve control net server interfaces. 
:return: list of control net server interfaces """ - d0 = self.options.get("controlnetif0") + d0 = self.options.get_config("controlnetif0") if d0: - logger.error("controlnet0 cannot be assigned with a host interface") - d1 = self.options.get("controlnetif1") - d2 = self.options.get("controlnetif2") - d3 = self.options.get("controlnetif3") + logging.error("controlnet0 cannot be assigned with a host interface") + d1 = self.options.get_config("controlnetif1") + d2 = self.options.get_config("controlnetif2") + d3 = self.options.get_config("controlnetif3") return [None, d1, d2, d3] def get_control_net_index(self, dev: str) -> int: @@ -1213,7 +1406,7 @@ class Session: :param conf_required: flag to check if conf is required :return: control net node """ - logger.debug( + logging.debug( "add/remove control net: index(%s) remove(%s) conf_required(%s)", net_index, remove, @@ -1227,7 +1420,7 @@ class Session: return None else: prefix_spec = CtrlNet.DEFAULT_PREFIX_LIST[net_index] - logger.debug("prefix spec: %s", prefix_spec) + logging.debug("prefix spec: %s", prefix_spec) server_iface = self.get_control_net_server_ifaces()[net_index] # return any existing controlnet bridge @@ -1246,10 +1439,11 @@ class Session: # use the updown script for control net 0 only. updown_script = None + if net_index == 0: - updown_script = self.options.get("controlnet_updown_script") or None + updown_script = self.options.get_config("controlnet_updown_script") if not updown_script: - logger.debug("controlnet updown script not configured") + logging.debug("controlnet updown script not configured") prefixes = prefix_spec.split() if len(prefixes) > 1: @@ -1263,25 +1457,28 @@ class Session: else: prefix = prefixes[0] - logger.info( + logging.info( "controlnet(%s) prefix(%s) updown(%s) serverintf(%s)", _id, prefix, updown_script, server_iface, ) - options = CtrlNet.create_options() - options.prefix = prefix - options.updown_script = updown_script - options.serverintf = server_iface - control_net = self.create_node(CtrlNet, False, _id, options=options) + control_net = self.create_node( + CtrlNet, + start=False, + prefix=prefix, + _id=_id, + updown_script=updown_script, + serverintf=server_iface, + ) control_net.brname = f"ctrl{net_index}.{self.short_session_id()}" control_net.startup() return control_net def add_remove_control_iface( self, - node: CoreNode, + node: Union[CoreNode, PhysicalNode], net_index: int = 0, remove: bool = False, conf_required: bool = True, @@ -1316,16 +1513,14 @@ class Session: mac=utils.random_mac(), ip4=ip4, ip4_mask=ip4_mask, - mtu=DEFAULT_MTU, ) - iface = node.create_iface(iface_data) - control_net.attach(iface) + iface = node.new_iface(control_net, iface_data) iface.control = True except ValueError: msg = f"Control interface not added to node {node.id}. " msg += f"Invalid control network prefix ({control_net.prefix}). " msg += "A longer prefix length may be required for this many nodes." 
- logger.exception(msg) + logging.exception(msg) def update_control_iface_hosts( self, net_index: int = 0, remove: bool = False @@ -1337,18 +1532,18 @@ class Session: :param remove: flag to check if it should be removed :return: nothing """ - if not self.options.get_bool("update_etc_hosts", False): + if not self.options.get_config_bool("update_etc_hosts", default=False): return try: control_net = self.get_control_net(net_index) except CoreError: - logger.exception("error retrieving control net node") + logging.exception("error retrieving control net node") return header = f"CORE session {self.id} host entries" if remove: - logger.info("Removing /etc/hosts file entries.") + logging.info("Removing /etc/hosts file entries.") utils.file_demunge("/etc/hosts", header) return @@ -1358,7 +1553,7 @@ class Session: for ip in iface.ips(): entries.append(f"{ip.ip} {name}") - logger.info("Adding %d /etc/hosts file entries.", len(entries)) + logging.info("Adding %d /etc/hosts file entries.", len(entries)) utils.file_munge("/etc/hosts", header, "\n".join(entries) + "\n") def runtime(self) -> float: @@ -1366,7 +1561,7 @@ class Session: Return the current time we have been in the runtime state, or zero if not in runtime. """ - if self.is_running(): + if self.state == EventTypes.RUNTIME_STATE: return time.monotonic() - self.state_time else: return 0.0 @@ -1387,7 +1582,7 @@ class Session: current_time = self.runtime() if current_time > 0: if event_time <= current_time: - logger.warning( + logging.warning( "could not schedule past event for time %s (run time is now %s)", event_time, current_time, @@ -1399,7 +1594,7 @@ class Session: ) if not name: name = "" - logger.info( + logging.info( "scheduled event %s at time %s data=%s", name, event_time + current_time, @@ -1418,12 +1613,12 @@ class Session: :return: nothing """ if data is None: - logger.warning("no data for event node(%s) name(%s)", node_id, name) + logging.warning("no data for event node(%s) name(%s)", node_id, name) return now = self.runtime() if not name: name = "" - logger.info("running event %s at time %s cmd=%s", name, now, data) + logging.info("running event %s at time %s cmd=%s", name, now, data) if not node_id: utils.mute_detach(data) else: @@ -1443,11 +1638,3 @@ class Session: color = LINK_COLORS[index] self.link_colors[network_id] = color return color - - def is_running(self) -> bool: - """ - Convenience for checking if this session is in the runtime state. - - :return: True if in the runtime state, False otherwise - """ - return self.state == EventTypes.RUNTIME_STATE diff --git a/daemon/core/emulator/sessionconfig.py b/daemon/core/emulator/sessionconfig.py index b6d5bcd3..9b22bcc7 100644 --- a/daemon/core/emulator/sessionconfig.py +++ b/daemon/core/emulator/sessionconfig.py @@ -1,87 +1,93 @@ -from typing import Optional +from typing import Any, List -from core.config import ConfigBool, ConfigInt, ConfigString, Configuration -from core.errors import CoreError +from core.config import ConfigurableManager, ConfigurableOptions, Configuration +from core.emulator.enumerations import ConfigDataTypes, RegisterTlvs from core.plugins.sdt import Sdt -class SessionConfig: +class SessionConfig(ConfigurableManager, ConfigurableOptions): """ Provides session configuration. 
""" - options: list[Configuration] = [ - ConfigString(id="controlnet", label="Control Network"), - ConfigString(id="controlnet0", label="Control Network 0"), - ConfigString(id="controlnet1", label="Control Network 1"), - ConfigString(id="controlnet2", label="Control Network 2"), - ConfigString(id="controlnet3", label="Control Network 3"), - ConfigString(id="controlnet_updown_script", label="Control Network Script"), - ConfigBool(id="enablerj45", default="1", label="Enable RJ45s"), - ConfigBool(id="preservedir", default="0", label="Preserve session dir"), - ConfigBool(id="enablesdt", default="0", label="Enable SDT3D output"), - ConfigString(id="sdturl", default=Sdt.DEFAULT_SDT_URL, label="SDT3D URL"), - ConfigBool(id="ovs", default="0", label="Enable OVS"), - ConfigInt(id="platform_id_start", default="1", label="EMANE Platform ID Start"), - ConfigInt(id="nem_id_start", default="1", label="EMANE NEM ID Start"), - ConfigBool(id="link_enabled", default="1", label="EMANE Links?"), - ConfigInt( - id="loss_threshold", default="30", label="EMANE Link Loss Threshold (%)" + name: str = "session" + options: List[Configuration] = [ + Configuration( + _id="controlnet", _type=ConfigDataTypes.STRING, label="Control Network" ), - ConfigInt( - id="link_interval", default="1", label="EMANE Link Check Interval (sec)" + Configuration( + _id="controlnet0", _type=ConfigDataTypes.STRING, label="Control Network 0" + ), + Configuration( + _id="controlnet1", _type=ConfigDataTypes.STRING, label="Control Network 1" + ), + Configuration( + _id="controlnet2", _type=ConfigDataTypes.STRING, label="Control Network 2" + ), + Configuration( + _id="controlnet3", _type=ConfigDataTypes.STRING, label="Control Network 3" + ), + Configuration( + _id="controlnet_updown_script", + _type=ConfigDataTypes.STRING, + label="Control Network Script", + ), + Configuration( + _id="enablerj45", + _type=ConfigDataTypes.BOOL, + default="1", + label="Enable RJ45s", + ), + Configuration( + _id="preservedir", + _type=ConfigDataTypes.BOOL, + default="0", + label="Preserve session dir", + ), + Configuration( + _id="enablesdt", + _type=ConfigDataTypes.BOOL, + default="0", + label="Enable SDT3D output", + ), + Configuration( + _id="sdturl", + _type=ConfigDataTypes.STRING, + default=Sdt.DEFAULT_SDT_URL, + label="SDT3D URL", + ), + Configuration( + _id="ovs", _type=ConfigDataTypes.BOOL, default="0", label="Enable OVS" ), - ConfigInt(id="link_timeout", default="4", label="EMANE Link Timeout (sec)"), - ConfigInt(id="mtu", default="0", label="MTU for All Devices"), ] + config_type: RegisterTlvs = RegisterTlvs.UTILITY - def __init__(self, config: dict[str, str] = None) -> None: + def __init__(self) -> None: + super().__init__() + self.set_configs(self.default_values()) + + def get_config( + self, + _id: str, + node_id: int = ConfigurableManager._default_node, + config_type: str = ConfigurableManager._default_type, + default: Any = None, + ) -> str: """ - Create a SessionConfig instance. + Retrieves a specific configuration for a node and configuration type. 
- :param config: configuration to initialize with + :param _id: specific configuration to retrieve + :param node_id: node id to store configuration for + :param config_type: configuration type to store configuration for + :param default: default value to return when value is not found + :return: configuration value """ - self._config: dict[str, str] = {x.id: x.default for x in self.options} - self._config.update(config or {}) + value = super().get_config(_id, node_id, config_type, default) + if value == "": + value = default + return value - def update(self, config: dict[str, str]) -> None: - """ - Update current configuration with provided values. - - :param config: configuration to update with - :return: nothing - """ - self._config.update(config) - - def set(self, name: str, value: str) -> None: - """ - Set a configuration value. - - :param name: name of configuration to set - :param value: value to set - :return: nothing - """ - self._config[name] = value - - def get(self, name: str, default: str = None) -> Optional[str]: - """ - Retrieve configuration value. - - :param name: name of configuration to get - :param default: value to return as default - :return: return found configuration value or default - """ - return self._config.get(name, default) - - def all(self) -> dict[str, str]: - """ - Retrieve all configuration options. - - :return: configuration value dict - """ - return self._config - - def get_bool(self, name: str, default: bool = None) -> bool: + def get_config_bool(self, name: str, default: Any = None) -> bool: """ Get configuration value as a boolean. @@ -89,15 +95,12 @@ class SessionConfig: :param default: default value if not found :return: boolean for configuration value """ - value = self._config.get(name) - if value is None and default is None: - raise CoreError(f"missing session options for {name}") + value = self.get_config(name) if value is None: return default - else: - return value.lower() == "true" + return value.lower() == "true" - def get_int(self, name: str, default: int = None) -> int: + def get_config_int(self, name: str, default: Any = None) -> int: """ Get configuration value as int. @@ -105,10 +108,7 @@ class SessionConfig: :param default: default value if not found :return: int for configuration value """ - value = self._config.get(name) - if value is None and default is None: - raise CoreError(f"missing session options for {name}") - if value is None: - return default - else: - return int(value) + value = self.get_config(name, default=default) + if value is not None: + value = int(value) + return value diff --git a/daemon/core/errors.py b/daemon/core/errors.py index 83d252b8..a75bd536 100644 --- a/daemon/core/errors.py +++ b/daemon/core/errors.py @@ -11,7 +11,7 @@ class CoreCommandError(subprocess.CalledProcessError): def __str__(self) -> str: return ( - f"command({self.cmd}), status({self.returncode}):\n" + f"Command({self.cmd}), Status({self.returncode}):\n" f"stdout: {self.output}\nstderr: {self.stderr}" ) @@ -46,11 +46,3 @@ class CoreServiceBootError(Exception): """ pass - - -class CoreConfigError(Exception): - """ - Used when there is an error defining a configurable option. 
- """ - - pass diff --git a/daemon/core/executables.py b/daemon/core/executables.py index f04d88de..16f159fc 100644 --- a/daemon/core/executables.py +++ b/daemon/core/executables.py @@ -1,33 +1,34 @@ -BASH: str = "bash" -ETHTOOL: str = "ethtool" -IP: str = "ip" -MOUNT: str = "mount" -NFTABLES: str = "nft" -OVS_VSCTL: str = "ovs-vsctl" -SYSCTL: str = "sysctl" -TC: str = "tc" -TEST: str = "test" -UMOUNT: str = "umount" -VCMD: str = "vcmd" -VNODED: str = "vnoded" +from typing import List -COMMON_REQUIREMENTS: list[str] = [ +BASH: str = "bash" +VNODED: str = "vnoded" +VCMD: str = "vcmd" +SYSCTL: str = "sysctl" +IP: str = "ip" +ETHTOOL: str = "ethtool" +TC: str = "tc" +EBTABLES: str = "ebtables" +MOUNT: str = "mount" +UMOUNT: str = "umount" +OVS_VSCTL: str = "ovs-vsctl" +TEST: str = "test" + +COMMON_REQUIREMENTS: List[str] = [ BASH, + EBTABLES, ETHTOOL, IP, MOUNT, - NFTABLES, SYSCTL, TC, - TEST, UMOUNT, - VCMD, - VNODED, + TEST, ] -OVS_REQUIREMENTS: list[str] = [OVS_VSCTL] +VCMD_REQUIREMENTS: List[str] = [VNODED, VCMD] +OVS_REQUIREMENTS: List[str] = [OVS_VSCTL] -def get_requirements(use_ovs: bool) -> list[str]: +def get_requirements(use_ovs: bool) -> List[str]: """ Retrieve executable requirements needed to run CORE. @@ -37,4 +38,6 @@ def get_requirements(use_ovs: bool) -> list[str]: requirements = COMMON_REQUIREMENTS if use_ovs: requirements += OVS_REQUIREMENTS + else: + requirements += VCMD_REQUIREMENTS return requirements diff --git a/daemon/core/gui/app.py b/daemon/core/gui/app.py index 4fd1dce5..be744bb4 100644 --- a/daemon/core/gui/app.py +++ b/daemon/core/gui/app.py @@ -1,28 +1,26 @@ import logging import math import tkinter as tk -from tkinter import PhotoImage, font, messagebox, ttk +from tkinter import PhotoImage, font, ttk from tkinter.ttk import Progressbar -from typing import Any, Optional +from typing import Any, Dict, Optional, Type import grpc -from core.gui import appconfig, images -from core.gui import nodeutils as nutils -from core.gui import themes +from core.gui import appconfig, themes from core.gui.appconfig import GuiConfig from core.gui.coreclient import CoreClient from core.gui.dialogs.error import ErrorDialog from core.gui.frames.base import InfoFrameBase from core.gui.frames.default import DefaultInfoFrame -from core.gui.graph.manager import CanvasManager -from core.gui.images import ImageEnum +from core.gui.graph.graph import CanvasGraph +from core.gui.images import ImageEnum, Images from core.gui.menubar import Menubar +from core.gui.nodeutils import NodeUtils from core.gui.statusbar import StatusBar from core.gui.themes import PADY from core.gui.toolbar import Toolbar -logger = logging.getLogger(__name__) WIDTH: int = 1000 HEIGHT: int = 800 @@ -31,13 +29,13 @@ class Application(ttk.Frame): def __init__(self, proxy: bool, session_id: int = None) -> None: super().__init__() # load node icons - nutils.setup() + NodeUtils.setup() # widgets self.menubar: Optional[Menubar] = None self.toolbar: Optional[Toolbar] = None self.right_frame: Optional[ttk.Frame] = None - self.manager: Optional[CanvasManager] = None + self.canvas: Optional[CanvasGraph] = None self.statusbar: Optional[StatusBar] = None self.progress: Optional[Progressbar] = None self.infobar: Optional[ttk.Frame] = None @@ -45,7 +43,7 @@ class Application(ttk.Frame): self.show_infobar: tk.BooleanVar = tk.BooleanVar(value=False) # fonts - self.fonts_size: dict[str, int] = {} + self.fonts_size: Dict[str, int] = {} self.icon_text_font: Optional[font.Font] = None self.edge_font: Optional[font.Font] = None @@ -79,7 
+77,7 @@ class Application(ttk.Frame): self.master.title("CORE") self.center() self.master.protocol("WM_DELETE_WINDOW", self.on_closing) - image = images.from_enum(ImageEnum.CORE, width=images.DIALOG_SIZE) + image = Images.get(ImageEnum.CORE, 16) self.master.tk.call("wm", "iconphoto", self.master._w, image) self.master.option_add("*tearOff", tk.FALSE) self.setup_file_dialogs() @@ -138,14 +136,26 @@ class Application(ttk.Frame): label.grid(sticky=tk.EW, pady=PADY) def draw_canvas(self) -> None: - self.manager = CanvasManager(self.right_frame, self, self.core) - self.manager.notebook.grid(sticky=tk.NSEW) + canvas_frame = ttk.Frame(self.right_frame) + canvas_frame.rowconfigure(0, weight=1) + canvas_frame.columnconfigure(0, weight=1) + canvas_frame.grid(row=0, column=0, sticky=tk.NSEW, pady=1) + self.canvas = CanvasGraph(canvas_frame, self, self.core) + self.canvas.grid(sticky=tk.NSEW) + scroll_y = ttk.Scrollbar(canvas_frame, command=self.canvas.yview) + scroll_y.grid(row=0, column=1, sticky=tk.NS) + scroll_x = ttk.Scrollbar( + canvas_frame, orient=tk.HORIZONTAL, command=self.canvas.xview + ) + scroll_x.grid(row=1, column=0, sticky=tk.EW) + self.canvas.configure(xscrollcommand=scroll_x.set) + self.canvas.configure(yscrollcommand=scroll_y.set) def draw_status(self) -> None: self.statusbar = StatusBar(self.right_frame, self) self.statusbar.grid(sticky=tk.EW, columnspan=2) - def display_info(self, frame_class: type[InfoFrameBase], **kwargs: Any) -> None: + def display_info(self, frame_class: Type[InfoFrameBase], **kwargs: Any) -> None: if not self.show_infobar.get(): return self.clear_info() @@ -169,30 +179,17 @@ class Application(ttk.Frame): def hide_info(self) -> None: self.infobar.grid_forget() - def show_grpc_exception( - self, message: str, e: grpc.RpcError, blocking: bool = False - ) -> None: - logger.exception("app grpc exception", exc_info=e) - dialog = ErrorDialog(self, "GRPC Exception", message, e.details()) - if blocking: - dialog.show() - else: - self.after(0, lambda: dialog.show()) + def show_grpc_exception(self, title: str, e: grpc.RpcError) -> None: + logging.exception("app grpc exception", exc_info=e) + message = e.details() + self.show_error(title, message) - def show_exception(self, message: str, e: Exception) -> None: - logger.exception("app exception", exc_info=e) - self.after( - 0, lambda: ErrorDialog(self, "App Exception", message, str(e)).show() - ) + def show_exception(self, title: str, e: Exception) -> None: + logging.exception("app exception", exc_info=e) + self.show_error(title, str(e)) - def show_exception_data(self, title: str, message: str, details: str) -> None: - self.after(0, lambda: ErrorDialog(self, title, message, details).show()) - - def show_error(self, title: str, message: str, blocking: bool = False) -> None: - if blocking: - messagebox.showerror(title, message, parent=self) - else: - self.after(0, lambda: messagebox.showerror(title, message, parent=self)) + def show_error(self, title: str, message: str) -> None: + self.after(0, lambda: ErrorDialog(self, title, message).show()) def on_closing(self) -> None: if self.toolbar.picker: @@ -204,17 +201,15 @@ class Application(ttk.Frame): def joined_session_update(self) -> None: if self.core.is_runtime(): - self.menubar.set_state(is_runtime=True) self.toolbar.set_runtime() else: - self.menubar.set_state(is_runtime=False) self.toolbar.set_design() - def get_enum_icon(self, image_enum: ImageEnum, *, width: int) -> PhotoImage: - return images.from_enum(image_enum, width=width, scale=self.app_scale) + def 
get_icon(self, image_enum: ImageEnum, width: int) -> PhotoImage: + return Images.get(image_enum, int(width * self.app_scale)) - def get_file_icon(self, file_path: str, *, width: int) -> PhotoImage: - return images.from_file(file_path, width=width, scale=self.app_scale) + def get_custom_icon(self, image_file: str, width: int) -> PhotoImage: + return Images.get_custom(image_file, int(width * self.app_scale)) def close(self) -> None: self.master.destroy() diff --git a/daemon/core/gui/appconfig.py b/daemon/core/gui/appconfig.py index 0a5ae76b..6bc213eb 100644 --- a/daemon/core/gui/appconfig.py +++ b/daemon/core/gui/appconfig.py @@ -1,7 +1,7 @@ import os import shutil from pathlib import Path -from typing import Optional +from typing import Dict, List, Optional, Type import yaml @@ -26,7 +26,7 @@ LOCAL_XMLS_PATH: Path = DATA_PATH.joinpath("xmls").absolute() LOCAL_MOBILITY_PATH: Path = DATA_PATH.joinpath("mobility").absolute() # configuration data -TERMINALS: dict[str, str] = { +TERMINALS: Dict[str, str] = { "xterm": "xterm -e", "aterm": "aterm -e", "eterm": "eterm -e", @@ -36,7 +36,7 @@ TERMINALS: dict[str, str] = { "xfce4-terminal": "xfce4-terminal -x", "gnome-terminal": "gnome-terminal --window --", } -EDITORS: list[str] = ["$EDITOR", "vim", "emacs", "gedit", "nano", "vi"] +EDITORS: List[str] = ["$EDITOR", "vim", "emacs", "gedit", "nano", "vi"] class IndentDumper(yaml.Dumper): @@ -46,17 +46,17 @@ class IndentDumper(yaml.Dumper): class CustomNode(yaml.YAMLObject): yaml_tag: str = "!CustomNode" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader - def __init__(self, name: str, image: str, services: list[str]) -> None: + def __init__(self, name: str, image: str, services: List[str]) -> None: self.name: str = name self.image: str = image - self.services: list[str] = services + self.services: List[str] = services class CoreServer(yaml.YAMLObject): yaml_tag: str = "!CoreServer" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader def __init__(self, name: str, address: str) -> None: self.name: str = name @@ -65,7 +65,7 @@ class CoreServer(yaml.YAMLObject): class Observer(yaml.YAMLObject): yaml_tag: str = "!Observer" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader def __init__(self, name: str, cmd: str) -> None: self.name: str = name @@ -74,7 +74,7 @@ class Observer(yaml.YAMLObject): class PreferencesConfig(yaml.YAMLObject): yaml_tag: str = "!PreferencesConfig" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader def __init__( self, @@ -95,7 +95,7 @@ class PreferencesConfig(yaml.YAMLObject): class LocationConfig(yaml.YAMLObject): yaml_tag: str = "!LocationConfig" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader def __init__( self, @@ -118,34 +118,41 @@ class LocationConfig(yaml.YAMLObject): class IpConfigs(yaml.YAMLObject): yaml_tag: str = "!IpConfigs" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader - def __init__(self, **kwargs) -> None: - self.__setstate__(kwargs) - - def __setstate__(self, kwargs): - self.ip4s: list[str] = kwargs.get( - "ip4s", ["10.0.0.0", "192.168.0.0", "172.16.0.0"] - ) - self.ip4: str = kwargs.get("ip4", self.ip4s[0]) - self.ip6s: list[str] = kwargs.get("ip6s", ["2001::", "2002::", "a::"]) - self.ip6: str = kwargs.get("ip6", 
self.ip6s[0]) - self.enable_ip4: bool = kwargs.get("enable_ip4", True) - self.enable_ip6: bool = kwargs.get("enable_ip6", True) + def __init__( + self, + ip4: str = None, + ip6: str = None, + ip4s: List[str] = None, + ip6s: List[str] = None, + ) -> None: + if ip4s is None: + ip4s = ["10.0.0.0", "192.168.0.0", "172.16.0.0"] + self.ip4s: List[str] = ip4s + if ip6s is None: + ip6s = ["2001::", "2002::", "a::"] + self.ip6s: List[str] = ip6s + if ip4 is None: + ip4 = self.ip4s[0] + self.ip4: str = ip4 + if ip6 is None: + ip6 = self.ip6s[0] + self.ip6: str = ip6 class GuiConfig(yaml.YAMLObject): yaml_tag: str = "!GuiConfig" - yaml_loader: type[yaml.SafeLoader] = yaml.SafeLoader + yaml_loader: Type[yaml.SafeLoader] = yaml.SafeLoader def __init__( self, preferences: PreferencesConfig = None, location: LocationConfig = None, - servers: list[CoreServer] = None, - nodes: list[CustomNode] = None, - recentfiles: list[str] = None, - observers: list[Observer] = None, + servers: List[CoreServer] = None, + nodes: List[CustomNode] = None, + recentfiles: List[str] = None, + observers: List[Observer] = None, scale: float = 1.0, ips: IpConfigs = None, mac: str = "00:00:00:aa:00:00", @@ -158,16 +165,16 @@ class GuiConfig(yaml.YAMLObject): self.location: LocationConfig = location if servers is None: servers = [] - self.servers: list[CoreServer] = servers + self.servers: List[CoreServer] = servers if nodes is None: nodes = [] - self.nodes: list[CustomNode] = nodes + self.nodes: List[CustomNode] = nodes if recentfiles is None: recentfiles = [] - self.recentfiles: list[str] = recentfiles + self.recentfiles: List[str] = recentfiles if observers is None: observers = [] - self.observers: list[Observer] = observers + self.observers: List[Observer] = observers self.scale: float = scale if ips is None: ips = IpConfigs() @@ -178,8 +185,7 @@ class GuiConfig(yaml.YAMLObject): def copy_files(current_path: Path, new_path: Path) -> None: for current_file in current_path.glob("*"): new_file = new_path.joinpath(current_file.name) - if not new_file.exists(): - shutil.copy(current_file, new_file) + shutil.copy(current_file, new_file) def find_terminal() -> Optional[str]: @@ -191,32 +197,35 @@ def find_terminal() -> Optional[str]: def check_directory() -> None: - HOME_PATH.mkdir(exist_ok=True) - BACKGROUNDS_PATH.mkdir(exist_ok=True) - CUSTOM_EMANE_PATH.mkdir(exist_ok=True) - CUSTOM_SERVICE_PATH.mkdir(exist_ok=True) - ICONS_PATH.mkdir(exist_ok=True) - MOBILITY_PATH.mkdir(exist_ok=True) - XMLS_PATH.mkdir(exist_ok=True) - SCRIPT_PATH.mkdir(exist_ok=True) + if HOME_PATH.exists(): + return + HOME_PATH.mkdir() + BACKGROUNDS_PATH.mkdir() + CUSTOM_EMANE_PATH.mkdir() + CUSTOM_SERVICE_PATH.mkdir() + ICONS_PATH.mkdir() + MOBILITY_PATH.mkdir() + XMLS_PATH.mkdir() + SCRIPT_PATH.mkdir() + copy_files(LOCAL_ICONS_PATH, ICONS_PATH) copy_files(LOCAL_BACKGROUND_PATH, BACKGROUNDS_PATH) copy_files(LOCAL_XMLS_PATH, XMLS_PATH) copy_files(LOCAL_MOBILITY_PATH, MOBILITY_PATH) - if not CONFIG_PATH.exists(): - terminal = find_terminal() - if "EDITOR" in os.environ: - editor = EDITORS[0] - else: - editor = EDITORS[1] - preferences = PreferencesConfig(editor, terminal) - config = GuiConfig(preferences=preferences) - save(config) + + terminal = find_terminal() + if "EDITOR" in os.environ: + editor = EDITORS[0] + else: + editor = EDITORS[1] + preferences = PreferencesConfig(editor, terminal) + config = GuiConfig(preferences=preferences) + save(config) def read() -> GuiConfig: with CONFIG_PATH.open("r") as f: - return yaml.safe_load(f) + return yaml.load(f, 
Loader=yaml.SafeLoader) def save(config: GuiConfig) -> None: diff --git a/daemon/core/gui/coreclient.py b/daemon/core/gui/coreclient.py index da2ca6d6..01225c6b 100644 --- a/daemon/core/gui/coreclient.py +++ b/daemon/core/gui/coreclient.py @@ -6,21 +6,26 @@ import json import logging import os import tkinter as tk -from collections.abc import Iterable from pathlib import Path from tkinter import messagebox -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple import grpc -from core.api.grpc import client, configservices_pb2, core_pb2 +from core.api.grpc import ( + client, + configservices_pb2, + core_pb2, + emane_pb2, + mobility_pb2, + services_pb2, + wlan_pb2, +) from core.api.grpc.wrappers import ( ConfigOption, ConfigService, - ConfigServiceDefaults, - EmaneModelConfig, - Event, ExceptionEvent, + Interface, Link, LinkEvent, LinkType, @@ -30,25 +35,23 @@ from core.api.grpc.wrappers import ( NodeServiceData, NodeType, Position, - Server, - ServiceConfig, - ServiceFileConfig, Session, SessionLocation, SessionState, ThroughputsEvent, ) -from core.gui import nodeutils as nutils -from core.gui.appconfig import XMLS_PATH, CoreServer, Observer +from core.gui import appconfig +from core.gui.appconfig import BACKGROUNDS_PATH, XMLS_PATH, CoreServer, Observer from core.gui.dialogs.emaneinstall import EmaneInstallDialog +from core.gui.dialogs.error import ErrorDialog from core.gui.dialogs.mobilityplayer import MobilityPlayer from core.gui.dialogs.sessions import SessionsDialog from core.gui.graph.edges import CanvasEdge from core.gui.graph.node import CanvasNode +from core.gui.graph.shape import AnnotationData, Shape +from core.gui.graph.shapeutils import ShapeType from core.gui.interface import InterfaceManager -from core.gui.nodeutils import NodeDraw - -logger = logging.getLogger(__name__) +from core.gui.nodeutils import NodeDraw, NodeUtils if TYPE_CHECKING: from core.gui.app import Application @@ -57,7 +60,7 @@ GUI_SOURCE = "gui" CPU_USAGE_DELAY = 3 -def to_dict(config: dict[str, ConfigOption]) -> dict[str, str]: +def to_dict(config: Dict[str, ConfigOption]) -> Dict[str, str]: return {x: y.value for x, y in config.items()} @@ -72,30 +75,26 @@ class CoreClient: self.session: Optional[Session] = None self.user = getpass.getuser() - # menu options - self.show_throughputs: tk.BooleanVar = tk.BooleanVar(value=False) - # global service settings - self.services: dict[str, set[str]] = {} - self.config_services_groups: dict[str, set[str]] = {} - self.config_services: dict[str, ConfigService] = {} + self.services: Dict[str, Set[str]] = {} + self.config_services_groups: Dict[str, Set[str]] = {} + self.config_services: Dict[str, ConfigService] = {} # loaded configuration data - self.emane_models: list[str] = [] - self.servers: dict[str, CoreServer] = {} - self.custom_nodes: dict[str, NodeDraw] = {} - self.custom_observers: dict[str, Observer] = {} + self.servers: Dict[str, CoreServer] = {} + self.custom_nodes: Dict[str, NodeDraw] = {} + self.custom_observers: Dict[str, Observer] = {} self.read_config() # helpers - self.iface_to_edge: dict[tuple[int, ...], CanvasEdge] = {} + self.iface_to_edge: Dict[Tuple[int, ...], CanvasEdge] = {} self.ifaces_manager: InterfaceManager = InterfaceManager(self.app) self.observer: Optional[str] = None # session data - self.mobility_players: dict[int, MobilityPlayer] = {} - self.canvas_nodes: dict[int, CanvasNode] = {} - self.links: dict[str, CanvasEdge] = {} + self.mobility_players: Dict[int, MobilityPlayer] = 
{} + self.canvas_nodes: Dict[int, CanvasNode] = {} + self.links: Dict[str, CanvasEdge] = {} self.handling_throughputs: Optional[grpc.Future] = None self.handling_cpu_usage: Optional[grpc.Future] = None self.handling_events: Optional[grpc.Future] = None @@ -103,7 +102,8 @@ class CoreClient: @property def client(self) -> client.CoreGrpcClient: if self.session: - if not self._client.check_session(self.session.id): + response = self._client.check_session(self.session.id) + if not response.result: throughputs_enabled = self.handling_throughputs is not None self.cancel_throughputs() self.cancel_events() @@ -154,20 +154,22 @@ class CoreClient: for observer in self.app.guiconfig.observers: self.custom_observers[observer.name] = observer - def handle_events(self, event: Event) -> None: - if not self.session or event.source == GUI_SOURCE: + def handle_events(self, event: core_pb2.Event) -> None: + if event.source == GUI_SOURCE: return if event.session_id != self.session.id: - logger.warning( + logging.warning( "ignoring event session(%s) current(%s)", event.session_id, self.session.id, ) return - if event.link_event: - self.app.after(0, self.handle_link_event, event.link_event) - elif event.session_event: - logger.info("session event: %s", event) + + if event.HasField("link_event"): + link_event = LinkEvent.from_proto(event.link_event) + self.app.after(0, self.handle_link_event, link_event) + elif event.HasField("session_event"): + logging.info("session event: %s", event) session_event = event.session_event if session_event.event <= SessionState.SHUTDOWN.value: self.session.state = SessionState(session_event.event) @@ -182,52 +184,55 @@ class CoreClient: else: dialog.set_pause() else: - logger.warning("unknown session event: %s", session_event) - elif event.node_event: - self.app.after(0, self.handle_node_event, event.node_event) - elif event.config_event: - logger.info("config event: %s", event) - elif event.exception_event: - self.handle_exception_event(event.exception_event) + logging.warning("unknown session event: %s", session_event) + elif event.HasField("node_event"): + node_event = NodeEvent.from_proto(event.node_event) + self.app.after(0, self.handle_node_event, node_event) + elif event.HasField("config_event"): + logging.info("config event: %s", event) + elif event.HasField("exception_event"): + event = ExceptionEvent.from_proto(event.session_id, event.exception_event) + self.handle_exception_event(event) else: - logger.info("unhandled event: %s", event) + logging.info("unhandled event: %s", event) def handle_link_event(self, event: LinkEvent) -> None: - logger.debug("Link event: %s", event) + logging.debug("Link event: %s", event) node1_id = event.link.node1_id node2_id = event.link.node2_id if node1_id == node2_id: - logger.warning("ignoring links with loops: %s", event) + logging.warning("ignoring links with loops: %s", event) return canvas_node1 = self.canvas_nodes[node1_id] canvas_node2 = self.canvas_nodes[node2_id] if event.link.type == LinkType.WIRELESS: if event.message_type == MessageType.ADD: - self.app.manager.add_wireless_edge( + self.app.canvas.add_wireless_edge( canvas_node1, canvas_node2, event.link ) elif event.message_type == MessageType.DELETE: - self.app.manager.delete_wireless_edge( + self.app.canvas.delete_wireless_edge( canvas_node1, canvas_node2, event.link ) elif event.message_type == MessageType.NONE: - self.app.manager.update_wireless_edge( + self.app.canvas.update_wireless_edge( canvas_node1, canvas_node2, event.link ) else: - logger.warning("unknown link event: 
%s", event) + logging.warning("unknown link event: %s", event) else: if event.message_type == MessageType.ADD: - self.app.manager.add_wired_edge(canvas_node1, canvas_node2, event.link) + self.app.canvas.add_wired_edge(canvas_node1, canvas_node2, event.link) + self.app.canvas.organize() elif event.message_type == MessageType.DELETE: - self.app.manager.delete_wired_edge(event.link) + self.app.canvas.delete_wired_edge(event.link) elif event.message_type == MessageType.NONE: - self.app.manager.update_wired_edge(event.link) + self.app.canvas.update_wired_edge(event.link) else: - logger.warning("unknown link event: %s", event) + logging.warning("unknown link event: %s", event) def handle_node_event(self, event: NodeEvent) -> None: - logger.debug("node event: %s", event) + logging.debug("node event: %s", event) node = event.node if event.message_type == MessageType.NONE: canvas_node = self.canvas_nodes[node.id] @@ -238,25 +243,26 @@ class CoreClient: canvas_node.update_icon(node.icon) elif event.message_type == MessageType.DELETE: canvas_node = self.canvas_nodes[node.id] - canvas_node.canvas_delete() + self.app.canvas.clear_selection() + self.app.canvas.select_object(canvas_node.id) + self.app.canvas.delete_selected_objects() elif event.message_type == MessageType.ADD: if node.id in self.session.nodes: - logger.error("core node already exists: %s", node) - self.app.manager.add_core_node(node) + logging.error("core node already exists: %s", node) + self.app.canvas.add_core_node(node) else: - logger.warning("unknown node event: %s", event) + logging.warning("unknown node event: %s", event) def enable_throughputs(self) -> None: - if not self.handling_throughputs: - self.handling_throughputs = self.client.throughputs( - self.session.id, self.handle_throughputs - ) + self.handling_throughputs = self.client.throughputs( + self.session.id, self.handle_throughputs + ) def cancel_throughputs(self) -> None: if self.handling_throughputs: self.handling_throughputs.cancel() self.handling_throughputs = None - self.app.manager.clear_throughputs() + self.app.canvas.clear_throughputs() def cancel_events(self) -> None: if self.handling_events: @@ -277,40 +283,41 @@ class CoreClient: CPU_USAGE_DELAY, self.handle_cpu_event ) - def handle_throughputs(self, event: ThroughputsEvent) -> None: + def handle_throughputs(self, event: core_pb2.ThroughputsEvent) -> None: + event = ThroughputsEvent.from_proto(event) if event.session_id != self.session.id: - logger.warning( + logging.warning( "ignoring throughput event session(%s) current(%s)", event.session_id, self.session.id, ) return - logger.debug("handling throughputs event: %s", event) - self.app.after(0, self.app.manager.set_throughputs, event) + logging.debug("handling throughputs event: %s", event) + self.app.after(0, self.app.canvas.set_throughputs, event) def handle_cpu_event(self, event: core_pb2.CpuUsageEvent) -> None: self.app.after(0, self.app.statusbar.set_cpu, event.usage) def handle_exception_event(self, event: ExceptionEvent) -> None: - logger.info("exception event: %s", event) + logging.info("exception event: %s", event) self.app.statusbar.add_alert(event) - def update_session_title(self) -> None: - title_file = self.session.file.name if self.session.file else "" - self.master.title(f"CORE Session({self.session.id}) {title_file}") - def join_session(self, session_id: int) -> None: - logger.info("joining session(%s)", session_id) + logging.info("joining session(%s)", session_id) self.reset() try: - self.session = self.client.get_session(session_id) - 
self.session.user = self.user - self.update_session_title() + response = self.client.get_session(session_id) + self.session = Session.from_proto(response.session) + self.client.set_session_user(self.session.id, self.user) + title_file = self.session.file.name if self.session.file else "" + self.master.title(f"CORE Session({self.session.id}) {title_file}") self.handling_events = self.client.events( self.session.id, self.handle_events ) self.ifaces_manager.joined(self.session.links) - self.app.manager.join(self.session) + self.app.canvas.reset_and_redraw(self.session) + self.parse_metadata() + self.app.canvas.organize() if self.is_runtime(): self.show_mobility_players() self.app.after(0, self.app.joined_session_update) @@ -320,14 +327,79 @@ class CoreClient: def is_runtime(self) -> bool: return self.session and self.session.state == SessionState.RUNTIME + def parse_metadata(self) -> None: + # canvas setting + config = self.session.metadata + canvas_config = config.get("canvas") + logging.debug("canvas metadata: %s", canvas_config) + if canvas_config: + canvas_config = json.loads(canvas_config) + gridlines = canvas_config.get("gridlines", True) + self.app.canvas.show_grid.set(gridlines) + fit_image = canvas_config.get("fit_image", False) + self.app.canvas.adjust_to_dim.set(fit_image) + wallpaper_style = canvas_config.get("wallpaper-style", 1) + self.app.canvas.scale_option.set(wallpaper_style) + width = self.app.guiconfig.preferences.width + height = self.app.guiconfig.preferences.height + dimensions = canvas_config.get("dimensions", [width, height]) + self.app.canvas.redraw_canvas(dimensions) + wallpaper = canvas_config.get("wallpaper") + if wallpaper: + wallpaper = str(appconfig.BACKGROUNDS_PATH.joinpath(wallpaper)) + self.app.canvas.set_wallpaper(wallpaper) + else: + self.app.canvas.redraw_canvas() + self.app.canvas.set_wallpaper(None) + + # load saved shapes + shapes_config = config.get("shapes") + if shapes_config: + shapes_config = json.loads(shapes_config) + for shape_config in shapes_config: + logging.debug("loading shape: %s", shape_config) + shape_type = shape_config["type"] + try: + shape_type = ShapeType(shape_type) + coords = shape_config["iconcoords"] + data = AnnotationData( + shape_config["label"], + shape_config["fontfamily"], + shape_config["fontsize"], + shape_config["labelcolor"], + shape_config["color"], + shape_config["border"], + shape_config["width"], + shape_config["bold"], + shape_config["italic"], + shape_config["underline"], + ) + shape = Shape( + self.app, self.app.canvas, shape_type, *coords, data=data + ) + self.app.canvas.shapes[shape.id] = shape + except ValueError: + logging.exception("unknown shape: %s", shape_type) + + # load edges config + edges_config = config.get("edges") + if edges_config: + edges_config = json.loads(edges_config) + logging.info("edges config: %s", edges_config) + for edge_config in edges_config: + edge = self.links[edge_config["token"]] + edge.width = edge_config["width"] + edge.color = edge_config["color"] + edge.redraw() + def create_new_session(self) -> None: """ Create a new session """ try: - session = self.client.create_session() - logger.info("created session: %s", session.id) - self.join_session(session.id) + response = self.client.create_session() + logging.info("created session: %s", response) + self.join_session(response.session_id) location_config = self.app.guiconfig.location self.session.location = SessionLocation( x=location_config.x, @@ -348,7 +420,7 @@ class CoreClient: session_id = self.session.id try: response = 
self.client.delete_session(session_id) - logger.info("deleted session(%s), Result: %s", session_id, response) + logging.info("deleted session(%s), Result: %s", session_id, response) except grpc.RpcError as e: self.app.show_grpc_exception("Delete Session Error", e) @@ -358,29 +430,30 @@ class CoreClient: """ try: self.client.connect() - # get current core configurations services/config services - core_config = self.client.get_config() - self.emane_models = sorted(core_config.emane_models) - for service in core_config.services: + # get all available services + response = self.client.get_services() + for service in response.services: group_services = self.services.setdefault(service.group, set()) group_services.add(service.name) - for service in core_config.config_services: - self.config_services[service.name] = service + # get config service informations + response = self.client.get_config_services() + for service in response.services: + self.config_services[service.name] = ConfigService.from_proto(service) group_services = self.config_services_groups.setdefault( service.group, set() ) group_services.add(service.name) # join provided session, create new session, or show dialog to select an # existing session - sessions = self.client.get_sessions() + response = self.client.get_sessions() + sessions = response.sessions if session_id: - session_ids = {x.id for x in sessions} + session_ids = set(x.id for x in sessions) if session_id not in session_ids: - self.app.show_error( - "Join Session Error", - f"{session_id} does not exist", - blocking=True, + dialog = ErrorDialog( + self.app, "Join Session Error", f"{session_id} does not exist" ) + dialog.show() self.app.close() else: self.join_session(session_id) @@ -391,88 +464,119 @@ class CoreClient: dialog = SessionsDialog(self.app, True) dialog.show() except grpc.RpcError as e: - logger.exception("core setup error") - self.app.show_grpc_exception("Setup Error", e, blocking=True) + logging.exception("core setup error") + dialog = ErrorDialog(self.app, "Setup Error", e.details()) + dialog.show() self.app.close() def edit_node(self, core_node: Node) -> None: try: - self.client.move_node( - self.session.id, core_node.id, core_node.position, source=GUI_SOURCE + position = core_node.position.to_proto() + self.client.edit_node( + self.session.id, core_node.id, position, source=GUI_SOURCE ) except grpc.RpcError as e: self.app.show_grpc_exception("Edit Node Error", e) - def get_links(self, definition: bool = False) -> list[Link]: - if not definition: - self.ifaces_manager.set_macs([x.link for x in self.links.values()]) + def send_servers(self) -> None: + for server in self.servers.values(): + self.client.add_session_server(self.session.id, server.name, server.address) + + def start_session(self) -> Tuple[bool, List[str]]: + self.ifaces_manager.reset_mac() + nodes = [x.to_proto() for x in self.session.nodes.values()] links = [] + asymmetric_links = [] for edge in self.links.values(): link = edge.link - if not definition: - node1 = self.session.nodes[link.node1_id] - node2 = self.session.nodes[link.node2_id] - if nutils.is_container(node1) and link.iface1 and not link.iface1.mac: - link.iface1.mac = self.ifaces_manager.next_mac() - if nutils.is_container(node2) and link.iface2 and not link.iface2.mac: - link.iface2.mac = self.ifaces_manager.next_mac() - links.append(link) + if link.iface1 and not link.iface1.mac: + link.iface1.mac = self.ifaces_manager.next_mac() + if link.iface2 and not link.iface2.mac: + link.iface2.mac = self.ifaces_manager.next_mac() + 
links.append(link.to_proto()) if edge.asymmetric_link: - links.append(edge.asymmetric_link) - return links - - def start_session(self, definition: bool = False) -> tuple[bool, list[str]]: - self.session.links = self.get_links(definition) - self.session.metadata = self.get_metadata() - self.session.servers.clear() - for server in self.servers.values(): - self.session.servers.append(Server(name=server.name, host=server.address)) + asymmetric_links.append(edge.asymmetric_link.to_proto()) + wlan_configs = self.get_wlan_configs_proto() + mobility_configs = self.get_mobility_configs_proto() + emane_model_configs = self.get_emane_model_configs_proto() + hooks = [x.to_proto() for x in self.session.hooks.values()] + service_configs = self.get_service_configs_proto() + file_configs = self.get_service_file_configs_proto() + config_service_configs = self.get_config_service_configs_proto() + emane_config = to_dict(self.session.emane_config) result = False exceptions = [] try: - result, exceptions = self.client.start_session(self.session, definition) - logger.info( - "start session(%s) definition(%s), result: %s", + self.send_servers() + response = self.client.start_session( self.session.id, - definition, - result, + nodes, + links, + self.session.location.to_proto(), + hooks, + emane_config, + emane_model_configs, + wlan_configs, + mobility_configs, + service_configs, + file_configs, + asymmetric_links, + config_service_configs, ) - if self.show_throughputs.get(): - self.enable_throughputs() + logging.info( + "start session(%s), result: %s", self.session.id, response.result + ) + if response.result: + self.set_metadata() + result = response.result + exceptions = response.exceptions except grpc.RpcError as e: self.app.show_grpc_exception("Start Session Error", e) return result, exceptions def stop_session(self, session_id: int = None) -> bool: - session_id = session_id or self.session.id - self.cancel_throughputs() + if not session_id: + session_id = self.session.id result = False try: - result = self.client.stop_session(session_id) - logger.info("stopped session(%s), result: %s", session_id, result) + response = self.client.stop_session(session_id) + logging.info("stopped session(%s), result: %s", session_id, response) + result = response.result except grpc.RpcError as e: self.app.show_grpc_exception("Stop Session Error", e) return result def show_mobility_players(self) -> None: for node in self.session.nodes.values(): - if not nutils.is_mobility(node): + if not NodeUtils.is_mobility(node): continue if node.mobility_config: mobility_player = MobilityPlayer(self.app, node) self.mobility_players[node.id] = mobility_player mobility_player.show() - def get_metadata(self) -> dict[str, str]: + def set_metadata(self) -> None: # create canvas data - canvas_config = self.app.manager.get_metadata() + wallpaper_path = None + if self.app.canvas.wallpaper_file: + wallpaper = Path(self.app.canvas.wallpaper_file) + if BACKGROUNDS_PATH == wallpaper.parent: + wallpaper_path = wallpaper.name + else: + wallpaper_path = str(wallpaper) + canvas_config = { + "wallpaper": wallpaper_path, + "wallpaper-style": self.app.canvas.scale_option.get(), + "gridlines": self.app.canvas.show_grid.get(), + "fit_image": self.app.canvas.adjust_to_dim.get(), + "dimensions": self.app.canvas.current_dimensions, + } canvas_config = json.dumps(canvas_config) # create shapes data shapes = [] - for canvas in self.app.manager.all(): - for shape in canvas.shapes.values(): - shapes.append(shape.metadata()) + for shape in 
self.app.canvas.shapes.values(): + shapes.append(shape.metadata()) shapes = json.dumps(shapes) # create edges config @@ -484,14 +588,10 @@ class CoreClient: edges_config.append(edge_config) edges_config = json.dumps(edges_config) - # create hidden metadata - hidden = [x.core_node.id for x in self.canvas_nodes.values() if x.hidden] - hidden = json.dumps(hidden) - # save metadata - return dict( - canvas=canvas_config, shapes=shapes, edges=edges_config, hidden=hidden - ) + metadata = dict(canvas=canvas_config, shapes=shapes, edges=edges_config) + response = self.client.set_session_metadata(self.session.id, metadata) + logging.debug("set session metadata %s, result: %s", metadata, response) def launch_terminal(self, node_id: int) -> None: try: @@ -503,9 +603,9 @@ class CoreClient: parent=self.app, ) return - node_term = self.client.get_node_terminal(self.session.id, node_id) - cmd = f"{terminal} {node_term} &" - logger.info("launching terminal %s", cmd) + response = self.client.get_node_terminal(self.session.id, node_id) + cmd = f"{terminal} {response.terminal} &" + logging.info("launching terminal %s", cmd) os.system(cmd) except grpc.RpcError as e: self.app.show_grpc_exception("Node Terminal Error", e) @@ -513,82 +613,187 @@ class CoreClient: def get_xml_dir(self) -> str: return str(self.session.file.parent) if self.session.file else str(XMLS_PATH) - def save_xml(self, file_path: Path = None) -> bool: + def save_xml(self, file_path: str = None) -> None: """ Save core session as to an xml file """ if not file_path and not self.session.file: - logger.error("trying to save xml for session with no file") - return False + logging.error("trying to save xml for session with no file") + return if not file_path: - file_path = self.session.file - result = False + file_path = str(self.session.file) try: if not self.is_runtime(): - logger.debug("sending session data to the daemon") - result, exceptions = self.start_session(definition=True) - if not result: - message = "\n".join(exceptions) - self.app.show_exception_data( - "Session Definition Exception", - "Failed to define session", - message, - ) - self.client.save_xml(self.session.id, str(file_path)) - if self.session.file != file_path: - self.session.file = file_path - self.update_session_title() - logger.info("saved xml file %s", file_path) - result = True + logging.debug("Send session data to the daemon") + self.send_data() + response = self.client.save_xml(self.session.id, file_path) + logging.info("saved xml file %s, result: %s", file_path, response) except grpc.RpcError as e: self.app.show_grpc_exception("Save XML Error", e) - return result - def open_xml(self, file_path: Path) -> None: + def open_xml(self, file_path: str) -> None: """ Open core xml """ try: - result, session_id = self._client.open_xml(file_path) - logger.info( - "open xml file %s, result(%s) session(%s)", - file_path, - result, - session_id, - ) - self.join_session(session_id) + response = self._client.open_xml(file_path) + logging.info("open xml file %s, response: %s", file_path, response) + self.join_session(response.session_id) except grpc.RpcError as e: self.app.show_grpc_exception("Open XML Error", e) def get_node_service(self, node_id: int, service_name: str) -> NodeServiceData: - node_service = self.client.get_node_service( - self.session.id, node_id, service_name + response = self.client.get_node_service(self.session.id, node_id, service_name) + logging.debug( + "get node(%s) %s service, response: %s", node_id, service_name, response ) - logger.debug( - "get node(%s) 
service(%s): %s", node_id, service_name, node_service + return NodeServiceData.from_proto(response.service) + + def set_node_service( + self, + node_id: int, + service_name: str, + dirs: List[str], + files: List[str], + startups: List[str], + validations: List[str], + shutdowns: List[str], + ) -> NodeServiceData: + response = self.client.set_node_service( + self.session.id, + node_id, + service_name, + directories=dirs, + files=files, + startup=startups, + validate=validations, + shutdown=shutdowns, ) - return node_service + logging.info( + "Set %s service for node(%s), files: %s, Startup: %s, " + "Validation: %s, Shutdown: %s, Result: %s", + service_name, + node_id, + files, + startups, + validations, + shutdowns, + response, + ) + response = self.client.get_node_service(self.session.id, node_id, service_name) + return NodeServiceData.from_proto(response.service) def get_node_service_file( self, node_id: int, service_name: str, file_name: str ) -> str: - data = self.client.get_node_service_file( + response = self.client.get_node_service_file( self.session.id, node_id, service_name, file_name ) - logger.debug( - "get service file for node(%s), service: %s, file: %s, data: %s", + logging.debug( + "get service file for node(%s), service: %s, file: %s, result: %s", + node_id, + service_name, + file_name, + response, + ) + return response.data + + def set_node_service_file( + self, node_id: int, service_name: str, file_name: str, data: str + ) -> None: + response = self.client.set_node_service_file( + self.session.id, node_id, service_name, file_name, data + ) + logging.info( + "set node(%s) service file, service: %s, file: %s, data: %s, result: %s", node_id, service_name, file_name, data, + response, ) - return data + + def create_nodes_and_links(self) -> None: + """ + create nodes and links that have not been created yet + """ + self.client.set_session_state(self.session.id, SessionState.DEFINITION.value) + for node in self.session.nodes.values(): + response = self.client.add_node( + self.session.id, node.to_proto(), source=GUI_SOURCE + ) + logging.debug("created node: %s", response) + asymmetric_links = [] + for edge in self.links.values(): + self.add_link(edge.link) + if edge.asymmetric_link: + asymmetric_links.append(edge.asymmetric_link) + for link in asymmetric_links: + self.add_link(link) + + def send_data(self) -> None: + """ + Send to daemon all session info, but don't start the session + """ + self.send_servers() + self.create_nodes_and_links() + for config_proto in self.get_wlan_configs_proto(): + self.client.set_wlan_config( + self.session.id, config_proto.node_id, config_proto.config + ) + for config_proto in self.get_mobility_configs_proto(): + self.client.set_mobility_config( + self.session.id, config_proto.node_id, config_proto.config + ) + for config_proto in self.get_service_configs_proto(): + self.client.set_node_service( + self.session.id, + config_proto.node_id, + config_proto.service, + startup=config_proto.startup, + validate=config_proto.validate, + shutdown=config_proto.shutdown, + ) + for config_proto in self.get_service_file_configs_proto(): + self.client.set_node_service_file( + self.session.id, + config_proto.node_id, + config_proto.service, + config_proto.file, + config_proto.data, + ) + for hook in self.session.hooks.values(): + self.client.add_hook( + self.session.id, hook.state.value, hook.file, hook.data + ) + for config_proto in self.get_emane_model_configs_proto(): + self.client.set_emane_model_config( + self.session.id, + config_proto.node_id, + 
config_proto.model, + config_proto.config, + config_proto.iface_id, + ) + config = to_dict(self.session.emane_config) + self.client.set_emane_config(self.session.id, config) + location = self.session.location + self.client.set_session_location( + self.session.id, + location.x, + location.y, + location.z, + location.lat, + location.lon, + location.alt, + location.scale, + ) + self.set_metadata() def close(self) -> None: """ Clean ups when done using grpc """ - logger.debug("close grpc") + logging.debug("close grpc") self.client.close() def next_node_id(self) -> int: @@ -611,15 +816,15 @@ class CoreClient: node_id = self.next_node_id() position = Position(x=x, y=y) image = None - if nutils.has_image(node_type): + if NodeUtils.is_image_node(node_type): image = "ubuntu:latest" emane = None if node_type == NodeType.EMANE: - if not self.emane_models: + if not self.session.emane_models: dialog = EmaneInstallDialog(self.app) dialog.show() return - emane = self.emane_models[0] + emane = self.session.emane_models[0] name = f"emane{node_id}" elif node_type == NodeType.WIRELESS_LAN: name = f"wlan{node_id}" @@ -636,15 +841,15 @@ class CoreClient: image=image, emane=emane, ) - if nutils.is_custom(node): - services = nutils.get_custom_services(self.app.guiconfig, model) - node.config_services = set(services) + if NodeUtils.is_custom(node_type, model): + services = NodeUtils.get_custom_node_services(self.app.guiconfig, model) + node.services[:] = services # assign default services to CORE node else: services = self.session.default_services.get(model) if services: - node.config_services = set(services) - logger.info( + node.services = services.copy() + logging.info( "add node(%s) to session(%s), coordinates(%s, %s)", node.name, self.session.id, @@ -654,7 +859,7 @@ class CoreClient: self.session.nodes[node.id] = node return node - def deleted_canvas_nodes(self, canvas_nodes: list[CanvasNode]) -> None: + def deleted_canvas_nodes(self, canvas_nodes: List[CanvasNode]) -> None: """ remove the nodes selected by the user and anything related to that node such as link, configurations, interfaces @@ -671,18 +876,64 @@ class CoreClient: links.append(edge.link) self.ifaces_manager.removed(links) - def save_edge(self, edge: CanvasEdge) -> None: + def create_iface(self, canvas_node: CanvasNode) -> Interface: + node = canvas_node.core_node + ip4, ip6 = self.ifaces_manager.get_ips(node) + ip4_mask = self.ifaces_manager.ip4_mask + ip6_mask = self.ifaces_manager.ip6_mask + iface_id = canvas_node.next_iface_id() + name = f"eth{iface_id}" + iface = Interface( + id=iface_id, + name=name, + ip4=ip4, + ip4_mask=ip4_mask, + ip6=ip6, + ip6_mask=ip6_mask, + ) + logging.info("create node(%s) interface(%s)", node.name, iface) + return iface + + def create_link( + self, edge: CanvasEdge, canvas_src_node: CanvasNode, canvas_dst_node: CanvasNode + ) -> Link: + """ + Create core link for a pair of canvas nodes, with token referencing + the canvas edge. 
+ """ + src_node = canvas_src_node.core_node + dst_node = canvas_dst_node.core_node + self.ifaces_manager.determine_subnets(canvas_src_node, canvas_dst_node) + src_iface = None + if NodeUtils.is_container_node(src_node.type): + src_iface = self.create_iface(canvas_src_node) + dst_iface = None + if NodeUtils.is_container_node(dst_node.type): + dst_iface = self.create_iface(canvas_dst_node) + link = Link( + type=LinkType.WIRED, + node1_id=src_node.id, + node2_id=dst_node.id, + iface1=src_iface, + iface2=dst_iface, + ) + logging.info("added link between %s and %s", src_node.name, dst_node.name) + return link + + def save_edge( + self, edge: CanvasEdge, canvas_src_node: CanvasNode, canvas_dst_node: CanvasNode + ) -> None: self.links[edge.token] = edge - src_node = edge.src.core_node - dst_node = edge.dst.core_node - if edge.link.iface1: + src_node = canvas_src_node.core_node + dst_node = canvas_dst_node.core_node + if NodeUtils.is_container_node(src_node.type): src_iface_id = edge.link.iface1.id self.iface_to_edge[(src_node.id, src_iface_id)] = edge - if edge.link.iface2: + if NodeUtils.is_container_node(dst_node.type): dst_iface_id = edge.link.iface2.id self.iface_to_edge[(dst_node.id, dst_iface_id)] = edge - def get_wlan_configs(self) -> list[tuple[int, dict[str, str]]]: + def get_wlan_configs_proto(self) -> List[wlan_pb2.WlanConfig]: configs = [] for node in self.session.nodes.values(): if node.type != NodeType.WIRELESS_LAN: @@ -690,81 +941,79 @@ class CoreClient: if not node.wlan_config: continue config = ConfigOption.to_dict(node.wlan_config) - configs.append((node.id, config)) + wlan_config = wlan_pb2.WlanConfig(node_id=node.id, config=config) + configs.append(wlan_config) return configs - def get_mobility_configs(self) -> list[tuple[int, dict[str, str]]]: + def get_mobility_configs_proto(self) -> List[mobility_pb2.MobilityConfig]: configs = [] for node in self.session.nodes.values(): - if not nutils.is_mobility(node): + if not NodeUtils.is_mobility(node): continue if not node.mobility_config: continue config = ConfigOption.to_dict(node.mobility_config) - configs.append((node.id, config)) + mobility_config = mobility_pb2.MobilityConfig( + node_id=node.id, config=config + ) + configs.append(mobility_config) return configs - def get_emane_model_configs(self) -> list[EmaneModelConfig]: + def get_emane_model_configs_proto(self) -> List[emane_pb2.EmaneModelConfig]: configs = [] for node in self.session.nodes.values(): for key, config in node.emane_model_configs.items(): model, iface_id = key - # config = ConfigOption.to_dict(config) + config = ConfigOption.to_dict(config) if iface_id is None: iface_id = -1 - config = EmaneModelConfig( - node_id=node.id, model=model, iface_id=iface_id, config=config + config_proto = emane_pb2.EmaneModelConfig( + node_id=node.id, iface_id=iface_id, model=model, config=config ) - configs.append(config) + configs.append(config_proto) return configs - def get_service_configs(self) -> list[ServiceConfig]: + def get_service_configs_proto(self) -> List[services_pb2.ServiceConfig]: configs = [] for node in self.session.nodes.values(): - if not nutils.is_container(node): + if not NodeUtils.is_container_node(node.type): continue if not node.service_configs: continue for name, config in node.service_configs.items(): - config = ServiceConfig( + config_proto = services_pb2.ServiceConfig( node_id=node.id, service=name, - files=config.configs, directories=config.dirs, + files=config.configs, startup=config.startup, validate=config.validate, shutdown=config.shutdown, ) - 
configs.append(config) + configs.append(config_proto) return configs - def get_service_file_configs(self) -> list[ServiceFileConfig]: + def get_service_file_configs_proto(self) -> List[services_pb2.ServiceFileConfig]: configs = [] for node in self.session.nodes.values(): - if not nutils.is_container(node): + if not NodeUtils.is_container_node(node.type): continue if not node.service_file_configs: continue for service, file_configs in node.service_file_configs.items(): for file, data in file_configs.items(): - config = ServiceFileConfig(node.id, service, file, data) - configs.append(config) + config_proto = services_pb2.ServiceFileConfig( + node_id=node.id, service=service, file=file, data=data + ) + configs.append(config_proto) return configs - def get_config_service_rendered(self, node_id: int, name: str) -> dict[str, str]: - return self.client.get_config_service_rendered(self.session.id, node_id, name) - - def get_config_service_defaults( - self, node_id: int, name: str - ) -> ConfigServiceDefaults: - return self.client.get_config_service_defaults(self.session.id, node_id, name) - def get_config_service_configs_proto( - self, - ) -> list[configservices_pb2.ConfigServiceConfig]: + self + ) -> List[configservices_pb2.ConfigServiceConfig]: config_service_protos = [] for node in self.session.nodes.values(): - if not nutils.is_container(node): + if not NodeUtils.is_container_node(node.type): continue if not node.config_service_configs: continue @@ -779,40 +1028,39 @@ class CoreClient: return config_service_protos def run(self, node_id: int) -> str: - logger.info("running node(%s) cmd: %s", node_id, self.observer) - _, output = self.client.node_command(self.session.id, node_id, self.observer) - return output + logging.info("running node(%s) cmd: %s", node_id, self.observer) + return self.client.node_command(self.session.id, node_id, self.observer).output - def get_wlan_config(self, node_id: int) -> dict[str, ConfigOption]: - config = self.client.get_wlan_config(self.session.id, node_id) - logger.debug( + def get_wlan_config(self, node_id: int) -> Dict[str, ConfigOption]: + response = self.client.get_wlan_config(self.session.id, node_id) + config = response.config + logging.debug( "get wlan configuration from node %s, result configuration: %s", node_id, config, ) - return config + return ConfigOption.from_dict(config) - def get_wireless_config(self, node_id: int) -> dict[str, ConfigOption]: - return self.client.get_wireless_config(self.session.id, node_id) - - def get_mobility_config(self, node_id: int) -> dict[str, ConfigOption]: - config = self.client.get_mobility_config(self.session.id, node_id) - logger.debug( + def get_mobility_config(self, node_id: int) -> Dict[str, ConfigOption]: + response = self.client.get_mobility_config(self.session.id, node_id) + config = response.config + logging.debug( "get mobility config from node %s, result configuration: %s", node_id, config, ) - return config + return ConfigOption.from_dict(config) def get_emane_model_config( self, node_id: int, model: str, iface_id: int = None - ) -> dict[str, ConfigOption]: + ) -> Dict[str, ConfigOption]: if iface_id is None: iface_id = -1 - config = self.client.get_emane_model_config( + response = self.client.get_emane_model_config( self.session.id, node_id, model, iface_id ) - logger.debug( + config = response.config + logging.debug( "get emane model config: node id: %s, EMANE model: %s, " "interface: %s, config: %s", node_id, @@ -820,21 +1068,42 @@ class CoreClient: iface_id, config, ) - return config + return 
ConfigOption.from_dict(config) - def execute_script(self, script: str, options: str) -> None: - session_id = self.client.execute_script(script, options) - logger.info("execute python script %s", session_id) - if session_id != -1: - self.join_session(session_id) + def execute_script(self, script) -> None: + response = self.client.execute_script(script) + logging.info("execute python script %s", response) + if response.session_id != -1: + self.join_session(response.session_id) def add_link(self, link: Link) -> None: - result, _, _ = self.client.add_link(self.session.id, link, source=GUI_SOURCE) - logger.debug("added link: %s", result) - if not result: - logger.error("error adding link: %s", link) + iface1 = link.iface1.to_proto() if link.iface1 else None + iface2 = link.iface2.to_proto() if link.iface2 else None + options = link.options.to_proto() if link.options else None + response = self.client.add_link( + self.session.id, + link.node1_id, + link.node2_id, + iface1, + iface2, + options, + source=GUI_SOURCE, + ) + logging.debug("added link: %s", response) + if not response.result: + logging.error("error adding link: %s", link) def edit_link(self, link: Link) -> None: - result = self.client.edit_link(self.session.id, link, source=GUI_SOURCE) - if not result: - logger.error("error editing link: %s", link) + iface1_id = link.iface1.id if link.iface1 else None + iface2_id = link.iface2.id if link.iface2 else None + response = self.client.edit_link( + self.session.id, + link.node1_id, + link.node2_id, + link.options.to_proto(), + iface1_id, + iface2_id, + source=GUI_SOURCE, + ) + if not response.result: + logging.error("error editing link: %s", link) diff --git a/daemon/core/gui/data/icons/antenna.gif b/daemon/core/gui/data/icons/antenna.gif new file mode 100644 index 00000000..55814324 Binary files /dev/null and b/daemon/core/gui/data/icons/antenna.gif differ diff --git a/daemon/core/gui/data/icons/antenna.png b/daemon/core/gui/data/icons/antenna.png deleted file mode 100644 index 4247aa3d..00000000 Binary files a/daemon/core/gui/data/icons/antenna.png and /dev/null differ diff --git a/daemon/core/gui/data/icons/podman.png b/daemon/core/gui/data/icons/podman.png deleted file mode 100644 index 771e04a0..00000000 Binary files a/daemon/core/gui/data/icons/podman.png and /dev/null differ diff --git a/daemon/core/gui/data/icons/shadow.png b/daemon/core/gui/data/icons/shadow.png deleted file mode 100644 index 6d6f3571..00000000 Binary files a/daemon/core/gui/data/icons/shadow.png and /dev/null differ diff --git a/daemon/core/gui/data/icons/wireless.png b/daemon/core/gui/data/icons/wireless.png deleted file mode 100644 index 2b42b8dd..00000000 Binary files a/daemon/core/gui/data/icons/wireless.png and /dev/null differ diff --git a/daemon/core/gui/data/xmls/emane-demo-antenna.xml b/daemon/core/gui/data/xmls/emane-demo-antenna.xml index 00616339..935fd97a 100644 --- a/daemon/core/gui/data/xmls/emane-demo-antenna.xml +++ b/daemon/core/gui/data/xmls/emane-demo-antenna.xml @@ -1,373 +1,88 @@ - + - - + + - - + + - + - - + + - + - - + + - + - - + + - + - - + + - - + + - - + + - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
[emane-demo-antenna.xml hunks: XML element markup not recoverable, only +/- markers remain]
diff --git a/daemon/core/gui/data/xmls/emane-demo-eel.xml b/daemon/core/gui/data/xmls/emane-demo-eel.xml
index 66f8f2a8..4162458c 100644
--- a/daemon/core/gui/data/xmls/emane-demo-eel.xml
+++ b/daemon/core/gui/data/xmls/emane-demo-eel.xml
[hunks: XML element markup not recoverable]
diff --git a/daemon/core/gui/data/xmls/emane-demo-files.xml b/daemon/core/gui/data/xmls/emane-demo-files.xml
index 9e71d58f..da6f9c70 100644
--- a/daemon/core/gui/data/xmls/emane-demo-files.xml
+++ b/daemon/core/gui/data/xmls/emane-demo-files.xml
[hunks: XML element markup not recoverable]
diff --git a/daemon/core/gui/data/xmls/emane-demo-gpsd.xml b/daemon/core/gui/data/xmls/emane-demo-gpsd.xml
index 2dbc1294..06bc54dc 100644
--- a/daemon/core/gui/data/xmls/emane-demo-gpsd.xml
+++ b/daemon/core/gui/data/xmls/emane-demo-gpsd.xml
[hunks: XML element markup not recoverable]
diff --git a/daemon/core/gui/data/xmls/emane-demo-precomputed.xml b/daemon/core/gui/data/xmls/emane-demo-precomputed.xml
index d53e26ba..a19acba6 100644
--- a/daemon/core/gui/data/xmls/emane-demo-precomputed.xml
+++ b/daemon/core/gui/data/xmls/emane-demo-precomputed.xml
[hunks: XML element markup not recoverable]
diff --git a/daemon/core/gui/data/xmls/sample1.xml b/daemon/core/gui/data/xmls/sample1.xml
index 64d093a8..5055c225 100644
--- a/daemon/core/gui/data/xmls/sample1.xml
+++ b/daemon/core/gui/data/xmls/sample1.xml
[hunks: XML element markup not recoverable; the added per-node service file contents follow]
+ - - - - + + + + @@ -180,7 +186,6 @@ - @@ -193,35 +198,1635 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.3.2/24 + ipv6 address a:3::2/64 +! +interface eth1 + ip address 10.0.5.1/24 + ipv6 address a:5::1/64 +! +router ospf + router-id 10.0.3.2 + network 10.0.3.0/24 area 0 + network 10.0.5.0/24 area 0 +! +router ospf6 + router-id 10.0.3.2 + interface eth0 area 0.0.0.0 + interface eth1 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" 
+ exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospfd + + + killall ospfd + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth1.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth1.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth1.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.1.1/24 + ipv6 address a:1::1/64 +! +interface eth1 + ip address 10.0.2.1/24 + ipv6 address a:2::1/64 +! +router ospf + router-id 10.0.1.1 + network 10.0.1.0/24 area 0 + network 10.0.2.0/24 area 0 +! +router ospf6 + router-id 10.0.1.1 + interface eth0 area 0.0.0.0 + interface eth1 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" 
= "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospfd + + + killall ospfd + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth1.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth1.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth1.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.2.2/24 + ipv6 address a:2::2/64 +! +interface eth1 + ip address 10.0.3.1/24 + ipv6 address a:3::1/64 +! +interface eth2 + ip address 10.0.4.1/24 + ipv6 address a:4::1/64 +! +router ospf + router-id 10.0.2.2 + network 10.0.2.0/24 area 0 + network 10.0.3.0/24 area 0 + network 10.0.4.0/24 area 0 +! +router ospf6 + router-id 10.0.2.2 + interface eth0 area 0.0.0.0 + interface eth1 area 0.0.0.0 + interface eth2 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" 
!= "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospfd + + + killall ospfd + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth1.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth1.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth1.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth2.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth2.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth2.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.0.9/32 + ipv6 address a::9/128 + ipv6 ospf6 instance-id 65 + ipv6 ospf6 hello-interval 2 + ipv6 ospf6 dead-interval 6 + ipv6 ospf6 retransmit-interval 5 + ipv6 ospf6 network manet-designated-router + ipv6 ospf6 diffhellos + ipv6 ospf6 adjacencyconnectivity uniconnected + ipv6 ospf6 lsafullness mincostlsa +! +router ospf6 + router-id 10.0.0.9 + interface eth0 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! 
-e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.0.6/32 + ipv6 address a::6/128 + ipv6 ospf6 instance-id 65 + ipv6 ospf6 hello-interval 2 + ipv6 ospf6 dead-interval 6 + ipv6 ospf6 retransmit-interval 5 + ipv6 ospf6 network manet-designated-router + ipv6 ospf6 diffhellos + ipv6 ospf6 adjacencyconnectivity uniconnected + ipv6 ospf6 lsafullness mincostlsa +! +router ospf6 + router-id 10.0.0.6 + interface eth0 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! 
-e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.0.7/32 + ipv6 address a::7/128 + ipv6 ospf6 instance-id 65 + ipv6 ospf6 hello-interval 2 + ipv6 ospf6 dead-interval 6 + ipv6 ospf6 retransmit-interval 5 + ipv6 ospf6 network manet-designated-router + ipv6 ospf6 diffhellos + ipv6 ospf6 adjacencyconnectivity uniconnected + ipv6 ospf6 lsafullness mincostlsa +! +router ospf6 + router-id 10.0.0.7 + interface eth0 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! 
-e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.0.5/32 + ipv6 address a::3/128 + ipv6 ospf6 instance-id 65 + ipv6 ospf6 hello-interval 2 + ipv6 ospf6 dead-interval 6 + ipv6 ospf6 retransmit-interval 5 + ipv6 ospf6 network manet-designated-router + ipv6 ospf6 diffhellos + ipv6 ospf6 adjacencyconnectivity uniconnected + ipv6 ospf6 lsafullness mincostlsa +! +interface eth1 + ip address 10.0.6.2/24 + !ip ospf hello-interval 2 + !ip ospf dead-interval 6 + !ip ospf retransmit-interval 5 + !ip ospf network point-to-point + ipv6 address a:6::2/64 +! +router ospf + router-id 10.0.0.5 + network 10.0.0.5/32 area 0 + network 10.0.6.0/24 area 0 + redistribute connected metric-type 1 + redistribute ospf6 metric-type 1 +! +router ospf6 + router-id 10.0.0.5 + interface eth0 area 0.0.0.0 + redistribute connected + redistribute ospf +! 
+ + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" 
+ exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospfd + + + killall ospfd + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth1.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth1.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth1.rp_filter=0 + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.0.8/32 + ipv6 address a::8/128 + ipv6 ospf6 instance-id 65 + ipv6 ospf6 hello-interval 2 + ipv6 ospf6 dead-interval 6 + ipv6 ospf6 retransmit-interval 5 + ipv6 ospf6 network manet-designated-router + ipv6 ospf6 diffhellos + ipv6 ospf6 adjacencyconnectivity uniconnected + ipv6 ospf6 lsafullness mincostlsa +! +router ospf6 + router-id 10.0.0.8 + interface eth0 area 0.0.0.0 +! + + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" 
= "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" + exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 + + + + + + sh defaultroute.sh + + + #!/bin/sh +# auto-generated by DefaultRoute service (utility.py) +ip route add default via 10.0.1.1 +ip route add default via a:1::1 + + + + + + sh defaultroute.sh + + + #!/bin/sh +# auto-generated by DefaultRoute service (utility.py) +ip route add default via 10.0.1.1 +ip route add default via a:1::1 + + + + + + sh defaultroute.sh + + + #!/bin/sh +# auto-generated by DefaultRoute service (utility.py) +ip route add default via 10.0.1.1 +ip route add default via a:1::1 + + + + + + sh defaultroute.sh + + + #!/bin/sh +# auto-generated by DefaultRoute service (utility.py) +ip route add default via 10.0.1.1 +ip route add default via a:1::1 + + + + + + /etc/ssh + /var/run/sshd + + + sh startsshd.sh + + + killall sshd + + + #!/bin/sh +# auto-generated by SSH service (utility.py) +ssh-keygen -q -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key +chmod 655 /var/run/sshd +# wait until RSA host key has been generated to launch sshd +/usr/sbin/sshd -f /etc/ssh/sshd_config + + # auto-generated by SSH service (utility.py) +Port 22 +Protocol 2 +HostKey /etc/ssh/ssh_host_rsa_key +UsePrivilegeSeparation yes +PidFile /var/run/sshd/sshd.pid + +KeyRegenerationInterval 3600 +ServerKeyBits 768 + +SyslogFacility AUTH +LogLevel INFO + +LoginGraceTime 120 +PermitRootLogin yes +StrictModes yes + +RSAAuthentication yes +PubkeyAuthentication yes + +IgnoreRhosts yes +RhostsRSAAuthentication no +HostbasedAuthentication no + +PermitEmptyPasswords no +ChallengeResponseAuthentication no + +X11Forwarding yes +X11DisplayOffset 10 +PrintMotd no +PrintLastLog yes +TCPKeepAlive yes + +AcceptEnv LANG LC_* +Subsystem sftp /usr/lib/openssh/sftp-server +UsePAM yes +UseDNS no + + + + + + /usr/local/etc/quagga + /var/run/quagga + + + sh quaggaboot.sh zebra + + + pidof zebra + + + killall zebra + + + interface eth0 + ip address 10.0.4.2/24 + ipv6 address a:4::2/64 +! +interface eth1 + ip address 10.0.5.2/24 + ipv6 address a:5::2/64 +! +interface eth2 + ip address 10.0.6.1/24 + ipv6 address a:6::1/64 +! +router ospf + router-id 10.0.4.2 + network 10.0.4.0/24 area 0 + network 10.0.5.0/24 area 0 + network 10.0.6.0/24 area 0 +! +router ospf6 + router-id 10.0.4.2 + interface eth0 area 0.0.0.0 + interface eth1 area 0.0.0.0 + interface eth2 area 0.0.0.0 +! 
+ + #!/bin/sh +# auto-generated by zebra service (quagga.py) +QUAGGA_CONF=/usr/local/etc/quagga/Quagga.conf +QUAGGA_SBIN_SEARCH="/usr/local/sbin /usr/sbin /usr/lib/quagga" +QUAGGA_BIN_SEARCH="/usr/local/bin /usr/bin /usr/lib/quagga" +QUAGGA_STATE_DIR=/var/run/quagga + +searchforprog() +{ + prog=$1 + searchpath=$@ + ret= + for p in $searchpath; do + if [ -x $p/$prog ]; then + ret=$p + break + fi + done + echo $ret +} + +confcheck() +{ + CONF_DIR=`dirname $QUAGGA_CONF` + # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then + ln -s $CONF_DIR/Quagga.conf /etc/quagga/Quagga.conf + fi + # if /etc/quagga exists, point /etc/quagga/vtysh.conf -> CONF_DIR + if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then + ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf + fi +} + +bootdaemon() +{ + QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) + if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then + echo "ERROR: Quagga's '$1' daemon not found in search path:" + echo " $QUAGGA_SBIN_SEARCH" + return 1 + fi + + flags="" + + if [ "$1" = "xpimd" ] && \ + grep -E -q '^[[:space:]]*router[[:space:]]+pim6[[:space:]]*$' $QUAGGA_CONF; then + flags="$flags -6" + fi + + $QUAGGA_SBIN_DIR/$1 $flags -d + if [ "$?" != "0" ]; then + echo "ERROR: Quagga's '$1' daemon failed to start!:" + return 1 + fi +} + +bootquagga() +{ + QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) + if [ "z$QUAGGA_BIN_DIR" = "z" ]; then + echo "ERROR: Quagga's 'vtysh' program not found in search path:" + echo " $QUAGGA_BIN_SEARCH" + return 1 + fi + + # fix /var/run/quagga permissions + id -u quagga 2>/dev/null >/dev/null + if [ "$?" = "0" ]; then + chown quagga $QUAGGA_STATE_DIR + fi + + bootdaemon "zebra" + for r in rip ripng ospf6 ospf bgp babel; do + if grep -q "^router \<${r}\>" $QUAGGA_CONF; then + bootdaemon "${r}d" + fi + done + + if grep -E -q '^[[:space:]]*router[[:space:]]+pim6?[[:space:]]*$' $QUAGGA_CONF; then + bootdaemon "xpimd" + fi + + $QUAGGA_BIN_DIR/vtysh -b +} + +if [ "$1" != "zebra" ]; then + echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" 
+ exit 1 +fi +confcheck +bootquagga + + service integrated-vtysh-config + + + + + + pidof ospfd + + + killall ospfd + + + + + pidof ospf6d + + + killall ospf6d + + + + + sh ipforward.sh + + + #!/bin/sh +# auto-generated by IPForward service (utility.py) +/sbin/sysctl -w net.ipv4.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.all.forwarding=1 +/sbin/sysctl -w net.ipv6.conf.default.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.all.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.default.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.all.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.default.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth0.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth0.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth0.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth1.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth1.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth1.rp_filter=0 +/sbin/sysctl -w net.ipv4.conf.eth2.forwarding=1 +/sbin/sysctl -w net.ipv4.conf.eth2.send_redirects=0 +/sbin/sysctl -w net.ipv4.conf.eth2.rp_filter=0 + + + @@ -235,13 +1840,10 @@ - - - - - + + @@ -259,5 +1861,9 @@ + + + + diff --git a/daemon/core/gui/dialogs/alerts.py b/daemon/core/gui/dialogs/alerts.py index b13f0797..9e430214 100644 --- a/daemon/core/gui/dialogs/alerts.py +++ b/daemon/core/gui/dialogs/alerts.py @@ -3,7 +3,7 @@ check engine light """ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional from core.api.grpc.wrappers import ExceptionEvent, ExceptionLevel from core.gui.dialogs.dialog import Dialog @@ -19,7 +19,7 @@ class AlertsDialog(Dialog): super().__init__(app, "Alerts") self.tree: Optional[ttk.Treeview] = None self.codetext: Optional[CodeText] = None - self.alarm_map: dict[int, ExceptionEvent] = {} + self.alarm_map: Dict[int, ExceptionEvent] = {} self.draw() def draw(self) -> None: diff --git a/daemon/core/gui/dialogs/canvassizeandscale.py b/daemon/core/gui/dialogs/canvassizeandscale.py index 863d1174..e50bf986 100644 --- a/daemon/core/gui/dialogs/canvassizeandscale.py +++ b/daemon/core/gui/dialogs/canvassizeandscale.py @@ -7,7 +7,7 @@ from typing import TYPE_CHECKING from core.gui import validation from core.gui.dialogs.dialog import Dialog -from core.gui.graph.manager import CanvasManager +from core.gui.graph.graph import CanvasGraph from core.gui.themes import FRAME_PAD, PADX, PADY if TYPE_CHECKING: @@ -22,9 +22,9 @@ class SizeAndScaleDialog(Dialog): create an instance for size and scale object """ super().__init__(app, "Canvas Size and Scale") - self.manager: CanvasManager = self.app.manager + self.canvas: CanvasGraph = self.app.canvas self.section_font: font.Font = font.Font(weight=font.BOLD) - width, height = self.manager.current().current_dimensions + width, height = self.canvas.current_dimensions self.pixel_width: tk.IntVar = tk.IntVar(value=width) self.pixel_height: tk.IntVar = tk.IntVar(value=height) location = self.app.core.session.location @@ -189,7 +189,9 @@ class SizeAndScaleDialog(Dialog): def click_apply(self) -> None: width, height = self.pixel_width.get(), self.pixel_height.get() - self.manager.redraw_canvas((width, height)) + self.canvas.redraw_canvas((width, height)) + if self.canvas.wallpaper: + self.canvas.redraw_wallpaper() location = self.app.core.session.location location.x = self.x.get() location.y = self.y.get() diff --git a/daemon/core/gui/dialogs/canvaswallpaper.py b/daemon/core/gui/dialogs/canvaswallpaper.py index 
5b0f27b3..629f9f36 100644 --- a/daemon/core/gui/dialogs/canvaswallpaper.py +++ b/daemon/core/gui/dialogs/canvaswallpaper.py @@ -4,17 +4,15 @@ set wallpaper import logging import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, List, Optional -from core.gui import images from core.gui.appconfig import BACKGROUNDS_PATH from core.gui.dialogs.dialog import Dialog from core.gui.graph.graph import CanvasGraph +from core.gui.images import Images from core.gui.themes import PADX, PADY from core.gui.widgets import image_chooser -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -25,14 +23,14 @@ class CanvasWallpaperDialog(Dialog): create an instance of CanvasWallpaper object """ super().__init__(app, "Canvas Background") - self.canvas: CanvasGraph = self.app.manager.current() + self.canvas: CanvasGraph = self.app.canvas self.scale_option: tk.IntVar = tk.IntVar(value=self.canvas.scale_option.get()) self.adjust_to_dim: tk.BooleanVar = tk.BooleanVar( value=self.canvas.adjust_to_dim.get() ) self.filename: tk.StringVar = tk.StringVar(value=self.canvas.wallpaper_file) self.image_label: Optional[ttk.Label] = None - self.options: list[ttk.Radiobutton] = [] + self.options: List[ttk.Radiobutton] = [] self.draw() def draw(self) -> None: @@ -134,7 +132,7 @@ class CanvasWallpaperDialog(Dialog): self.draw_preview() def draw_preview(self) -> None: - image = images.from_file(self.filename.get(), width=250, height=135) + image = Images.create(self.filename.get(), 250, 135) self.image_label.config(image=image) self.image_label.image = image @@ -163,11 +161,12 @@ class CanvasWallpaperDialog(Dialog): def click_apply(self) -> None: self.canvas.scale_option.set(self.scale_option.get()) self.canvas.adjust_to_dim.set(self.adjust_to_dim.get()) + self.canvas.show_grid.click_handler() filename = self.filename.get() if not filename: filename = None try: self.canvas.set_wallpaper(filename) except FileNotFoundError: - logger.error("invalid background: %s", filename) + logging.error("invalid background: %s", filename) self.destroy() diff --git a/daemon/core/gui/dialogs/colorpicker.py b/daemon/core/gui/dialogs/colorpicker.py index a27b1698..a2f131d4 100644 --- a/daemon/core/gui/dialogs/colorpicker.py +++ b/daemon/core/gui/dialogs/colorpicker.py @@ -3,7 +3,7 @@ custom color picker """ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Optional, Tuple from core.gui import validation from core.gui.dialogs.dialog import Dialog @@ -13,36 +13,6 @@ if TYPE_CHECKING: from core.gui.app import Application -def get_rgb(red: int, green: int, blue: int) -> str: - """ - Convert rgb integers to an rgb hex code (#). - - :param red: red value - :param green: green value - :param blue: blue value - :return: rgb hex code - """ - return f"#{red:02x}{green:02x}{blue:02x}" - - -def get_rgb_values(hex_code: str) -> tuple[int, int, int]: - """ - Convert a valid rgb hex code (#) to rgb integers. 
- - :param hex_code: valid rgb hex code - :return: a tuple of red, blue, and green values - """ - if len(hex_code) == 4: - red = hex_code[1] - green = hex_code[2] - blue = hex_code[3] - else: - red = hex_code[1:3] - green = hex_code[3:5] - blue = hex_code[5:] - return int(red, 16), int(green, 16), int(blue, 16) - - class ColorPickerDialog(Dialog): def __init__( self, master: tk.BaseWidget, app: "Application", initcolor: str = "#000000" @@ -57,7 +27,7 @@ class ColorPickerDialog(Dialog): self.blue_label: Optional[ttk.Label] = None self.display: Optional[tk.Frame] = None self.color: str = initcolor - red, green, blue = get_rgb_values(initcolor) + red, green, blue = self.get_rgb(initcolor) self.red: tk.IntVar = tk.IntVar(value=red) self.blue: tk.IntVar = tk.IntVar(value=blue) self.green: tk.IntVar = tk.IntVar(value=green) @@ -96,7 +66,7 @@ class ColorPickerDialog(Dialog): ) scale.grid(row=0, column=2, sticky=tk.EW, padx=PADX) self.red_label = ttk.Label( - frame, background=get_rgb(self.red.get(), 0, 0), width=5 + frame, background="#%02x%02x%02x" % (self.red.get(), 0, 0), width=5 ) self.red_label.grid(row=0, column=3, sticky=tk.EW) @@ -119,7 +89,7 @@ class ColorPickerDialog(Dialog): ) scale.grid(row=0, column=2, sticky=tk.EW, padx=PADX) self.green_label = ttk.Label( - frame, background=get_rgb(0, self.green.get(), 0), width=5 + frame, background="#%02x%02x%02x" % (0, self.green.get(), 0), width=5 ) self.green_label.grid(row=0, column=3, sticky=tk.EW) @@ -142,7 +112,7 @@ class ColorPickerDialog(Dialog): ) scale.grid(row=0, column=2, sticky=tk.EW, padx=PADX) self.blue_label = ttk.Label( - frame, background=get_rgb(0, 0, self.blue.get()), width=5 + frame, background="#%02x%02x%02x" % (0, 0, self.blue.get()), width=5 ) self.blue_label.grid(row=0, column=3, sticky=tk.EW) @@ -180,27 +150,39 @@ class ColorPickerDialog(Dialog): self.color = self.hex.get() self.destroy() + def get_hex(self) -> str: + """ + convert current RGB values into hex color + """ + red = self.red_entry.get() + blue = self.blue_entry.get() + green = self.green_entry.get() + return "#%02x%02x%02x" % (int(red), int(green), int(blue)) + def current_focus(self, focus: str) -> None: self.focus = focus def update_color(self, arg1=None, arg2=None, arg3=None) -> None: if self.focus == "rgb": - red = int(self.red_entry.get() or 0) - blue = int(self.blue_entry.get() or 0) - green = int(self.green_entry.get() or 0) + red = self.red_entry.get() + blue = self.blue_entry.get() + green = self.green_entry.get() self.set_scale(red, green, blue) - hex_code = get_rgb(red, green, blue) - self.hex.set(hex_code) - self.display.config(background=hex_code) - self.set_label(red, green, blue) + if red and blue and green: + hex_code = "#%02x%02x%02x" % (int(red), int(green), int(blue)) + self.hex.set(hex_code) + self.display.config(background=hex_code) + self.set_label(red, green, blue) elif self.focus == "hex": hex_code = self.hex.get() if len(hex_code) == 4 or len(hex_code) == 7: - red, green, blue = get_rgb_values(hex_code) - self.set_entry(red, green, blue) - self.set_scale(red, green, blue) - self.display.config(background=hex_code) - self.set_label(red, green, blue) + red, green, blue = self.get_rgb(hex_code) + else: + return + self.set_entry(red, green, blue) + self.set_scale(red, green, blue) + self.display.config(background=hex_code) + self.set_label(str(red), str(green), str(blue)) def scale_callback(self, var: tk.IntVar, color_var: tk.IntVar) -> None: color_var.set(var.get()) @@ -217,7 +199,21 @@ class ColorPickerDialog(Dialog): 
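# --- Illustrative aside (not part of the patch above or below) ---
# Both sides of the color picker change rely on the same two conversions:
# formatting channel integers into a "#rrggbb" string ("#%02x%02x%02x" or an
# f-string) and parsing such a string back into integers with int(..., 16).
# A minimal standalone sketch of that round trip, with hypothetical helper names:
def rgb_to_hex(red: int, green: int, blue: int) -> str:
    # each channel is rendered as a two-digit lowercase hex value
    return f"#{red:02x}{green:02x}{blue:02x}"

def hex_to_rgb(hex_code: str) -> tuple:
    # handles the short "#rgb" and long "#rrggbb" forms, as the dialog does
    if len(hex_code) == 4:
        red, green, blue = hex_code[1], hex_code[2], hex_code[3]
    else:
        red, green, blue = hex_code[1:3], hex_code[3:5], hex_code[5:7]
    return int(red, 16), int(green, 16), int(blue, 16)

assert rgb_to_hex(*hex_to_rgb("#00ff80")) == "#00ff80"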
self.green.set(green) self.blue.set(blue) - def set_label(self, red: int, green: int, blue: int) -> None: - self.red_label.configure(background=get_rgb(red, 0, 0)) - self.green_label.configure(background=get_rgb(0, green, 0)) - self.blue_label.configure(background=get_rgb(0, 0, blue)) + def set_label(self, red: str, green: str, blue: str) -> None: + self.red_label.configure(background="#%02x%02x%02x" % (int(red), 0, 0)) + self.green_label.configure(background="#%02x%02x%02x" % (0, int(green), 0)) + self.blue_label.configure(background="#%02x%02x%02x" % (0, 0, int(blue))) + + def get_rgb(self, hex_code: str) -> Tuple[int, int, int]: + """ + convert a valid hex code to RGB values + """ + if len(hex_code) == 4: + red = hex_code[1] + green = hex_code[2] + blue = hex_code[3] + else: + red = hex_code[1:3] + green = hex_code[3:5] + blue = hex_code[5:] + return int(red, 16), int(green, 16), int(blue, 16) diff --git a/daemon/core/gui/dialogs/configserviceconfig.py b/daemon/core/gui/dialogs/configserviceconfig.py index 0e873a79..14388f5a 100644 --- a/daemon/core/gui/dialogs/configserviceconfig.py +++ b/daemon/core/gui/dialogs/configserviceconfig.py @@ -4,7 +4,7 @@ Service configuration dialog import logging import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, List, Optional, Set import grpc @@ -18,8 +18,6 @@ from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import CodeText, ConfigFrame, ListboxScroll -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application from core.gui.coreclient import CoreClient @@ -34,23 +32,24 @@ class ConfigServiceConfigDialog(Dialog): self.core: "CoreClient" = app.core self.node: Node = node self.service_name: str = service_name - self.radiovar: tk.IntVar = tk.IntVar(value=2) - self.directories: list[str] = [] - self.templates: list[str] = [] - self.rendered: dict[str, str] = {} - self.dependencies: list[str] = [] - self.executables: list[str] = [] - self.startup_commands: list[str] = [] - self.validation_commands: list[str] = [] - self.shutdown_commands: list[str] = [] - self.default_startup: list[str] = [] - self.default_validate: list[str] = [] - self.default_shutdown: list[str] = [] + self.radiovar: tk.IntVar = tk.IntVar() + self.radiovar.set(2) + self.directories: List[str] = [] + self.templates: List[str] = [] + self.dependencies: List[str] = [] + self.executables: List[str] = [] + self.startup_commands: List[str] = [] + self.validation_commands: List[str] = [] + self.shutdown_commands: List[str] = [] + self.default_startup: List[str] = [] + self.default_validate: List[str] = [] + self.default_shutdown: List[str] = [] self.validation_mode: Optional[ServiceValidationMode] = None self.validation_time: Optional[int] = None - self.validation_period: tk.DoubleVar = tk.DoubleVar() - self.modes: list[str] = [] - self.mode_configs: dict[str, dict[str, str]] = {} + self.validation_period: tk.StringVar = tk.StringVar() + self.modes: List[str] = [] + self.mode_configs: Dict[str, Dict[str, str]] = {} + self.notebook: Optional[ttk.Notebook] = None self.templates_combobox: Optional[ttk.Combobox] = None self.modes_combobox: Optional[ttk.Combobox] = None @@ -60,14 +59,13 @@ class ConfigServiceConfigDialog(Dialog): self.validation_time_entry: Optional[ttk.Entry] = None self.validation_mode_entry: Optional[ttk.Entry] = None self.template_text: Optional[CodeText] = None - self.rendered_text: Optional[CodeText] = 
None self.validation_period_entry: Optional[ttk.Entry] = None - self.original_service_files: dict[str, str] = {} - self.temp_service_files: dict[str, str] = {} - self.modified_files: set[str] = set() + self.original_service_files: Dict[str, str] = {} + self.temp_service_files: Dict[str, str] = {} + self.modified_files: Set[str] = set() self.config_frame: Optional[ConfigFrame] = None - self.default_config: dict[str, str] = {} - self.config: dict[str, ConfigOption] = {} + self.default_config: Dict[str, str] = {} + self.config: Dict[str, ConfigOption] = {} self.has_error: bool = False self.load() if not self.has_error: @@ -75,7 +73,7 @@ class ConfigServiceConfigDialog(Dialog): def load(self) -> None: try: - self.core.start_session(definition=True) + self.core.create_nodes_and_links() service = self.core.config_services[self.service_name] self.dependencies = service.dependencies[:] self.executables = service.executables[:] @@ -87,23 +85,19 @@ class ConfigServiceConfigDialog(Dialog): self.validation_mode = service.validation_mode self.validation_time = service.validation_timer self.validation_period.set(service.validation_period) - defaults = self.core.get_config_service_defaults( - self.node.id, self.service_name - ) - self.original_service_files = defaults.templates + + response = self.core.client.get_config_service_defaults(self.service_name) + self.original_service_files = response.templates self.temp_service_files = dict(self.original_service_files) - self.modes = sorted(defaults.modes) - self.mode_configs = defaults.modes - self.config = ConfigOption.from_dict(defaults.config) + self.modes = sorted(x.name for x in response.modes) + self.mode_configs = {x.name: x.config for x in response.modes} + self.config = ConfigOption.from_dict(response.config) self.default_config = {x.name: x.value for x in self.config.values()} - self.rendered = self.core.get_config_service_rendered( - self.node.id, self.service_name - ) service_config = self.node.config_service_configs.get(self.service_name) if service_config: for key, value in service_config.config.items(): self.config[key].value = value - logger.info("default config: %s", self.default_config) + logging.info("default config: %s", self.default_config) for file, data in service_config.templates.items(): self.modified_files.add(file) self.temp_service_files[file] = data @@ -114,6 +108,7 @@ class ConfigServiceConfigDialog(Dialog): def draw(self) -> None: self.top.columnconfigure(0, weight=1) self.top.rowconfigure(0, weight=1) + # draw notebook self.notebook = ttk.Notebook(self.top) self.notebook.grid(sticky=tk.NSEW, pady=PADY) @@ -128,7 +123,6 @@ class ConfigServiceConfigDialog(Dialog): tab = ttk.Frame(self.notebook, padding=FRAME_PAD) tab.grid(sticky=tk.NSEW) tab.columnconfigure(0, weight=1) - tab.rowconfigure(2, weight=1) self.notebook.add(tab, text="Directories/Files") label = ttk.Label( @@ -141,54 +135,33 @@ class ConfigServiceConfigDialog(Dialog): frame.columnconfigure(1, weight=1) label = ttk.Label(frame, text="Directories") label.grid(row=0, column=0, sticky=tk.W, padx=PADX) - state = "readonly" if self.directories else tk.DISABLED - directories_combobox = ttk.Combobox(frame, values=self.directories, state=state) + directories_combobox = ttk.Combobox( + frame, values=self.directories, state="readonly" + ) directories_combobox.grid(row=0, column=1, sticky=tk.EW, pady=PADY) if self.directories: directories_combobox.current(0) - label = ttk.Label(frame, text="Files") + + label = ttk.Label(frame, text="Templates") label.grid(row=1, column=0, 
sticky=tk.W, padx=PADX) - state = "readonly" if self.templates else tk.DISABLED self.templates_combobox = ttk.Combobox( - frame, values=self.templates, state=state + frame, values=self.templates, state="readonly" ) self.templates_combobox.bind( "<>", self.handle_template_changed ) self.templates_combobox.grid(row=1, column=1, sticky=tk.EW, pady=PADY) - # draw file template tab - notebook = ttk.Notebook(tab) - notebook.rowconfigure(0, weight=1) - notebook.columnconfigure(0, weight=1) - notebook.grid(sticky=tk.NSEW, pady=PADY) - # draw rendered file tab - rendered_tab = ttk.Frame(notebook, padding=FRAME_PAD) - rendered_tab.grid(sticky=tk.NSEW) - rendered_tab.rowconfigure(0, weight=1) - rendered_tab.columnconfigure(0, weight=1) - notebook.add(rendered_tab, text="Rendered") - self.rendered_text = CodeText(rendered_tab) - self.rendered_text.grid(sticky=tk.NSEW) - self.rendered_text.text.bind("", self.update_template_file_data) - # draw template file tab - template_tab = ttk.Frame(notebook, padding=FRAME_PAD) - template_tab.grid(sticky=tk.NSEW) - template_tab.rowconfigure(0, weight=1) - template_tab.columnconfigure(0, weight=1) - notebook.add(template_tab, text="Template") - self.template_text = CodeText(template_tab) + + self.template_text = CodeText(tab) self.template_text.grid(sticky=tk.NSEW) - self.template_text.text.bind("", self.update_template_file_data) + tab.rowconfigure(self.template_text.grid_info()["row"], weight=1) if self.templates: self.templates_combobox.current(0) - template_name = self.templates[0] - temp_data = self.temp_service_files[template_name] - self.template_text.set_text(temp_data) - rendered_data = self.rendered[template_name] - self.rendered_text.set_text(rendered_data) - else: - self.template_text.text.configure(state=tk.DISABLED) - self.rendered_text.text.configure(state=tk.DISABLED) + self.template_text.text.delete(1.0, "end") + self.template_text.text.insert( + "end", self.temp_service_files[self.templates[0]] + ) + self.template_text.text.bind("", self.update_template_file_data) def draw_tab_config(self) -> None: tab = ttk.Frame(self.notebook, padding=FRAME_PAD) @@ -208,7 +181,7 @@ class ConfigServiceConfigDialog(Dialog): self.modes_combobox.bind("<>", self.handle_mode_changed) self.modes_combobox.grid(row=0, column=1, sticky=tk.EW, pady=PADY) - logger.info("config service config: %s", self.config) + logging.info("config service config: %s", self.config) self.config_frame = ConfigFrame(tab, self.app, self.config) self.config_frame.draw_config() self.config_frame.grid(sticky=tk.NSEW, pady=PADY) @@ -268,7 +241,7 @@ class ConfigServiceConfigDialog(Dialog): label = ttk.Label(frame, text="Validation Time") label.grid(row=0, column=0, sticky=tk.W, padx=PADX) self.validation_time_entry = ttk.Entry(frame) - self.validation_time_entry.insert("end", str(self.validation_time)) + self.validation_time_entry.insert("end", self.validation_time) self.validation_time_entry.config(state=tk.DISABLED) self.validation_time_entry.grid(row=0, column=1, sticky=tk.EW, pady=PADY) @@ -335,9 +308,9 @@ class ConfigServiceConfigDialog(Dialog): current_listbox.itemconfig(current_listbox.curselection()[0], bg="") self.destroy() return - service_config = self.node.config_service_configs.setdefault( - self.service_name, ConfigServiceData() - ) + service_config = self.node.config_service_configs.get(self.service_name) + if not service_config: + service_config = ConfigServiceData() if self.config_frame: self.config_frame.parse_config() service_config.config = {x.name: x.value for x in 
self.config.values()} @@ -348,25 +321,20 @@ class ConfigServiceConfigDialog(Dialog): self.destroy() def handle_template_changed(self, event: tk.Event) -> None: - template_name = self.templates_combobox.get() - temp_data = self.temp_service_files[template_name] - self.template_text.set_text(temp_data) - rendered = self.rendered[template_name] - self.rendered_text.set_text(rendered) + template = self.templates_combobox.get() + self.template_text.text.delete(1.0, "end") + self.template_text.text.insert("end", self.temp_service_files[template]) def handle_mode_changed(self, event: tk.Event) -> None: mode = self.modes_combobox.get() config = self.mode_configs[mode] - logger.info("mode config: %s", config) + logging.info("mode config: %s", config) self.config_frame.set_values(config) - def update_template_file_data(self, _event: tk.Event) -> None: + def update_template_file_data(self, event: tk.Event) -> None: + scrolledtext = event.widget template = self.templates_combobox.get() - self.temp_service_files[template] = self.rendered_text.get_text() - if self.rendered[template] != self.temp_service_files[template]: - self.modified_files.add(template) - return - self.temp_service_files[template] = self.template_text.get_text() + self.temp_service_files[template] = scrolledtext.get(1.0, "end") if self.temp_service_files[template] != self.original_service_files[template]: self.modified_files.add(template) else: @@ -381,33 +349,23 @@ class ConfigServiceConfigDialog(Dialog): return has_custom_templates or has_custom_config def click_defaults(self) -> None: - # clear all saved state data - self.modified_files.clear() self.node.config_service_configs.pop(self.service_name, None) - self.temp_service_files = dict(self.original_service_files) - # reset session definition and retrieve default rendered templates - self.core.start_session(definition=True) - self.rendered = self.core.get_config_service_rendered( - self.node.id, self.service_name - ) - logger.info( + logging.info( "cleared config service config: %s", self.node.config_service_configs ) - # reset current selected file data and config data, if present - template_name = self.templates_combobox.get() - temp_data = self.temp_service_files[template_name] - self.template_text.set_text(temp_data) - rendered_data = self.rendered[template_name] - self.rendered_text.set_text(rendered_data) + self.temp_service_files = dict(self.original_service_files) + filename = self.templates_combobox.get() + self.template_text.text.delete(1.0, "end") + self.template_text.text.insert("end", self.temp_service_files[filename]) if self.config_frame: - logger.info("resetting defaults: %s", self.default_config) + logging.info("resetting defaults: %s", self.default_config) self.config_frame.set_values(self.default_config) def click_copy(self) -> None: pass def append_commands( - self, commands: list[str], listbox: tk.Listbox, to_add: list[str] + self, commands: List[str], listbox: tk.Listbox, to_add: List[str] ) -> None: for cmd in to_add: commands.append(cmd) diff --git a/daemon/core/gui/dialogs/copyserviceconfig.py b/daemon/core/gui/dialogs/copyserviceconfig.py index 6b2f4927..b205e175 100644 --- a/daemon/core/gui/dialogs/copyserviceconfig.py +++ b/daemon/core/gui/dialogs/copyserviceconfig.py @@ -4,7 +4,7 @@ copy service config dialog import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional from core.gui.dialogs.dialog import Dialog from core.gui.themes import PADX, PADY @@ -29,7 +29,7 @@ class 
CopyServiceConfigDialog(Dialog): self.service: str = service self.file_name: str = file_name self.listbox: Optional[tk.Listbox] = None - self.nodes: dict[str, int] = {} + self.nodes: Dict[str, int] = {} self.draw() def draw(self) -> None: diff --git a/daemon/core/gui/dialogs/customnodes.py b/daemon/core/gui/dialogs/customnodes.py index ea4421e8..53451ab1 100644 --- a/daemon/core/gui/dialogs/customnodes.py +++ b/daemon/core/gui/dialogs/customnodes.py @@ -2,32 +2,31 @@ import logging import tkinter as tk from pathlib import Path from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Optional, Set from PIL.ImageTk import PhotoImage -from core.gui import images +from core.gui import nodeutils from core.gui.appconfig import ICONS_PATH, CustomNode from core.gui.dialogs.dialog import Dialog +from core.gui.images import Images from core.gui.nodeutils import NodeDraw from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import CheckboxList, ListboxScroll, image_chooser -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application class ServicesSelectDialog(Dialog): def __init__( - self, master: tk.BaseWidget, app: "Application", current_services: set[str] + self, master: tk.BaseWidget, app: "Application", current_services: Set[str] ) -> None: - super().__init__(app, "Node Config Services", master=master) + super().__init__(app, "Node Services", master=master) self.groups: Optional[ListboxScroll] = None self.services: Optional[CheckboxList] = None self.current: Optional[ListboxScroll] = None - self.current_services: set[str] = current_services + self.current_services: Set[str] = current_services self.draw() def draw(self) -> None: @@ -45,7 +44,7 @@ class ServicesSelectDialog(Dialog): label_frame.columnconfigure(0, weight=1) self.groups = ListboxScroll(label_frame) self.groups.grid(sticky=tk.NSEW) - for group in sorted(self.app.core.config_services_groups): + for group in sorted(self.app.core.services): self.groups.listbox.insert(tk.END, group) self.groups.listbox.bind("<>", self.handle_group_change) self.groups.listbox.selection_set(0) @@ -78,15 +77,15 @@ class ServicesSelectDialog(Dialog): button.grid(row=0, column=1, sticky=tk.EW) # trigger group change - self.handle_group_change() + self.groups.listbox.event_generate("<>") - def handle_group_change(self, event: tk.Event = None) -> None: + def handle_group_change(self, event: tk.Event) -> None: selection = self.groups.listbox.curselection() if selection: index = selection[0] group = self.groups.listbox.get(index) self.services.clear() - for name in sorted(self.app.core.config_services_groups[group]): + for name in sorted(self.app.core.services[group]): checked = name in self.current_services self.services.add(name, checked) @@ -114,7 +113,7 @@ class CustomNodesDialog(Dialog): self.image_button: Optional[ttk.Button] = None self.image: Optional[PhotoImage] = None self.image_file: Optional[str] = None - self.services: set[str] = set() + self.services: Set[str] = set() self.selected: Optional[str] = None self.selected_index: Optional[int] = None self.draw() @@ -147,7 +146,7 @@ class CustomNodesDialog(Dialog): frame, text="Icon", compound=tk.LEFT, command=self.click_icon ) self.image_button.grid(sticky=tk.EW, pady=PADY) - button = ttk.Button(frame, text="Config Services", command=self.click_services) + button = ttk.Button(frame, text="Services", command=self.click_services) button.grid(sticky=tk.EW) def draw_node_buttons(self) -> None: @@ -191,13 +190,13 
@@ class CustomNodesDialog(Dialog): def click_icon(self) -> None: file_path = image_chooser(self, ICONS_PATH) if file_path: - image = images.from_file(file_path, width=images.NODE_SIZE) + image = Images.create(file_path, nodeutils.ICON_SIZE) self.image = image self.image_file = file_path self.image_button.config(image=self.image) def click_services(self) -> None: - dialog = ServicesSelectDialog(self, self.app, set(self.services)) + dialog = ServicesSelectDialog(self, self.app, self.services) dialog.show() if dialog.current_services is not None: self.services.clear() @@ -211,17 +210,17 @@ class CustomNodesDialog(Dialog): name, node_draw.image_file, list(node_draw.services) ) self.app.guiconfig.nodes.append(custom_node) - logger.info("saving custom nodes: %s", self.app.guiconfig.nodes) + logging.info("saving custom nodes: %s", self.app.guiconfig.nodes) self.app.save_config() self.destroy() def click_create(self) -> None: name = self.name.get() if name not in self.app.core.custom_nodes: - image_file = str(Path(self.image_file).absolute()) + image_file = Path(self.image_file).stem custom_node = CustomNode(name, image_file, list(self.services)) node_draw = NodeDraw.from_custom(custom_node) - logger.info( + logging.info( "created new custom node (%s), image file (%s), services: (%s)", name, image_file, @@ -238,14 +237,14 @@ class CustomNodesDialog(Dialog): self.selected = name node_draw = self.app.core.custom_nodes.pop(previous_name) node_draw.model = name - node_draw.image_file = str(Path(self.image_file).absolute()) + node_draw.image_file = Path(self.image_file).stem node_draw.image = self.image - node_draw.services = set(self.services) - logger.debug( + node_draw.services = self.services + logging.debug( "edit custom node (%s), image: (%s), services (%s)", - node_draw.model, - node_draw.image_file, - node_draw.services, + name, + self.image_file, + self.services, ) self.app.core.custom_nodes[name] = node_draw self.nodes_list.listbox.delete(self.selected_index) diff --git a/daemon/core/gui/dialogs/dialog.py b/daemon/core/gui/dialogs/dialog.py index 5233bb27..ce05a5d5 100644 --- a/daemon/core/gui/dialogs/dialog.py +++ b/daemon/core/gui/dialogs/dialog.py @@ -2,8 +2,7 @@ import tkinter as tk from tkinter import ttk from typing import TYPE_CHECKING -from core.gui import images -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images from core.gui.themes import DIALOG_PAD if TYPE_CHECKING: @@ -26,7 +25,7 @@ class Dialog(tk.Toplevel): self.modal: bool = modal self.title(title) self.protocol("WM_DELETE_WINDOW", self.destroy) - image = images.from_enum(ImageEnum.CORE, width=images.DIALOG_SIZE) + image = Images.get(ImageEnum.CORE, 16) self.tk.call("wm", "iconphoto", self._w, image) self.columnconfigure(0, weight=1) self.rowconfigure(0, weight=1) diff --git a/daemon/core/gui/dialogs/emaneconfig.py b/daemon/core/gui/dialogs/emaneconfig.py index 00eda694..0829907a 100644 --- a/daemon/core/gui/dialogs/emaneconfig.py +++ b/daemon/core/gui/dialogs/emaneconfig.py @@ -4,14 +4,13 @@ emane configuration import tkinter as tk import webbrowser from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, List, Optional import grpc from core.api.grpc.wrappers import ConfigOption, Node -from core.gui import images from core.gui.dialogs.dialog import Dialog -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images from core.gui.themes import PADX, PADY from core.gui.widgets import ConfigFrame @@ -19,6 +18,40 @@ if 
TYPE_CHECKING: from core.gui.app import Application +class GlobalEmaneDialog(Dialog): + def __init__(self, master: tk.BaseWidget, app: "Application") -> None: + super().__init__(app, "EMANE Configuration", master=master) + self.config_frame: Optional[ConfigFrame] = None + self.enabled: bool = not self.app.core.is_runtime() + self.draw() + + def draw(self) -> None: + self.top.columnconfigure(0, weight=1) + self.top.rowconfigure(0, weight=1) + session = self.app.core.session + self.config_frame = ConfigFrame( + self.top, self.app, session.emane_config, self.enabled + ) + self.config_frame.draw_config() + self.config_frame.grid(sticky=tk.NSEW, pady=PADY) + self.draw_buttons() + + def draw_buttons(self) -> None: + frame = ttk.Frame(self.top) + frame.grid(sticky=tk.EW) + for i in range(2): + frame.columnconfigure(i, weight=1) + state = tk.NORMAL if self.enabled else tk.DISABLED + button = ttk.Button(frame, text="Apply", command=self.click_apply, state=state) + button.grid(row=0, column=0, sticky=tk.EW, padx=PADX) + button = ttk.Button(frame, text="Cancel", command=self.destroy) + button.grid(row=0, column=1, sticky=tk.EW) + + def click_apply(self) -> None: + self.config_frame.parse_config() + self.destroy() + + class EmaneModelDialog(Dialog): def __init__( self, @@ -37,13 +70,11 @@ class EmaneModelDialog(Dialog): self.has_error: bool = False try: config = self.node.emane_model_configs.get((self.model, self.iface_id)) - if not config: - config = self.node.emane_model_configs.get((self.model, None)) if not config: config = self.app.core.get_emane_model_config( self.node.id, self.model, self.iface_id ) - self.config: dict[str, ConfigOption] = config + self.config: Dict[str, ConfigOption] = config self.draw() except grpc.RpcError as e: self.app.show_grpc_exception("Get EMANE Config Error", e) @@ -82,8 +113,8 @@ class EmaneConfigDialog(Dialog): self.node: Node = node self.radiovar: tk.IntVar = tk.IntVar() self.radiovar.set(1) - self.emane_models: list[str] = [ - x.split("_")[1] for x in self.app.core.emane_models + self.emane_models: List[str] = [ + x.split("_")[1] for x in self.app.core.session.emane_models ] model = self.node.emane.split("_")[1] self.emane_model: tk.StringVar = tk.StringVar(value=model) @@ -112,7 +143,7 @@ class EmaneConfigDialog(Dialog): ) label.grid(pady=PADY) - image = images.from_enum(ImageEnum.EDITNODE, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.EDITNODE, 16) button = ttk.Button( self.top, image=image, @@ -147,8 +178,10 @@ class EmaneConfigDialog(Dialog): def draw_emane_buttons(self) -> None: frame = ttk.Frame(self.top) frame.grid(sticky=tk.EW, pady=PADY) - frame.columnconfigure(0, weight=1) - image = images.from_enum(ImageEnum.EDITNODE, width=images.BUTTON_SIZE) + for i in range(2): + frame.columnconfigure(i, weight=1) + + image = Images.get(ImageEnum.EDITNODE, 16) self.emane_model_button = ttk.Button( frame, text=f"{self.emane_model.get()} options", @@ -157,7 +190,18 @@ class EmaneConfigDialog(Dialog): command=self.click_model_config, ) self.emane_model_button.image = image - self.emane_model_button.grid(padx=PADX, sticky=tk.EW) + self.emane_model_button.grid(row=0, column=0, padx=PADX, sticky=tk.EW) + + image = Images.get(ImageEnum.EDITNODE, 16) + button = ttk.Button( + frame, + text="EMANE options", + image=image, + compound=tk.RIGHT, + command=self.click_emane_config, + ) + button.image = image + button.grid(row=0, column=1, sticky=tk.EW) def draw_apply_and_cancel(self) -> None: frame = ttk.Frame(self.top) @@ -170,6 +214,10 @@ class 
EmaneConfigDialog(Dialog): button = ttk.Button(frame, text="Cancel", command=self.destroy) button.grid(row=0, column=1, sticky=tk.EW) + def click_emane_config(self) -> None: + dialog = GlobalEmaneDialog(self, self.app) + dialog.show() + def click_model_config(self) -> None: """ draw emane model configuration diff --git a/daemon/core/gui/dialogs/error.py b/daemon/core/gui/dialogs/error.py index 726f8617..9d215e82 100644 --- a/daemon/core/gui/dialogs/error.py +++ b/daemon/core/gui/dialogs/error.py @@ -2,9 +2,8 @@ import tkinter as tk from tkinter import ttk from typing import TYPE_CHECKING, Optional -from core.gui import images from core.gui.dialogs.dialog import Dialog -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images from core.gui.themes import PADY from core.gui.widgets import CodeText @@ -13,11 +12,9 @@ if TYPE_CHECKING: class ErrorDialog(Dialog): - def __init__( - self, app: "Application", title: str, message: str, details: str - ) -> None: - super().__init__(app, title) - self.message: str = message + def __init__(self, app: "Application", title: str, details: str) -> None: + super().__init__(app, "CORE Exception") + self.title: str = title self.details: str = details self.error_message: Optional[CodeText] = None self.draw() @@ -25,15 +22,15 @@ class ErrorDialog(Dialog): def draw(self) -> None: self.top.columnconfigure(0, weight=1) self.top.rowconfigure(1, weight=1) - image = images.from_enum(ImageEnum.ERROR, width=images.ERROR_SIZE) + image = Images.get(ImageEnum.ERROR, 24) label = ttk.Label( - self.top, text=self.message, image=image, compound=tk.LEFT, anchor=tk.CENTER + self.top, text=self.title, image=image, compound=tk.LEFT, anchor=tk.CENTER ) label.image = image - label.grid(sticky=tk.W, pady=PADY) + label.grid(sticky=tk.EW, pady=PADY) self.error_message = CodeText(self.top) self.error_message.text.insert("1.0", self.details) self.error_message.text.config(state=tk.DISABLED) - self.error_message.grid(sticky=tk.EW, pady=PADY) + self.error_message.grid(sticky=tk.NSEW, pady=PADY) button = ttk.Button(self.top, text="Close", command=lambda: self.destroy()) button.grid(sticky=tk.EW) diff --git a/daemon/core/gui/dialogs/executepython.py b/daemon/core/gui/dialogs/executepython.py index 8c9b31ba..0bef9dc1 100644 --- a/daemon/core/gui/dialogs/executepython.py +++ b/daemon/core/gui/dialogs/executepython.py @@ -7,8 +7,6 @@ from core.gui.appconfig import SCRIPT_PATH from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -85,6 +83,6 @@ class ExecutePythonDialog(Dialog): def script_execute(self) -> None: file = self.file_entry.get() options = self.option_entry.get() - logger.info("Execute %s with options %s", file, options) - self.app.core.execute_script(file, options) + logging.info("Execute %s with options %s", file, options) + self.app.core.execute_script(file) self.destroy() diff --git a/daemon/core/gui/dialogs/find.py b/daemon/core/gui/dialogs/find.py index 54be81b0..6bfac47b 100644 --- a/daemon/core/gui/dialogs/find.py +++ b/daemon/core/gui/dialogs/find.py @@ -6,8 +6,6 @@ from typing import TYPE_CHECKING, Optional from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX, PADY -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -107,13 +105,9 @@ class FindDialog(Dialog): self.tree.selection_set(results[0]) def close_dialog(self) -> None: - 
self.clear_find() + self.app.canvas.delete("find") self.destroy() - def clear_find(self): - for canvas in self.app.manager.all(): - canvas.delete("find") - def click_select(self, _event: tk.Event = None) -> None: """ find the node that matches search criteria, circle around that node @@ -122,13 +116,13 @@ class FindDialog(Dialog): """ item = self.tree.selection() if item: - self.clear_find() + self.app.canvas.delete("find") node_id = int(self.tree.item(item, "text")) canvas_node = self.app.core.get_canvas_node(node_id) - self.app.manager.select(canvas_node.canvas.id) - x0, y0, x1, y1 = canvas_node.canvas.bbox(canvas_node.id) + + x0, y0, x1, y1 = self.app.canvas.bbox(canvas_node.id) dist = 5 * self.app.guiconfig.scale - canvas_node.canvas.create_oval( + self.app.canvas.create_oval( x0 - dist, y0 - dist, x1 + dist, @@ -138,11 +132,11 @@ class FindDialog(Dialog): width=3.0 * self.app.guiconfig.scale, ) - _x, _y, _, _ = canvas_node.canvas.bbox(canvas_node.id) - oid = canvas_node.canvas.find_withtag("rectangle") - x0, y0, x1, y1 = canvas_node.canvas.bbox(oid[0]) - logger.debug("Dist to most left: %s", abs(x0 - _x)) - logger.debug("White canvas width: %s", abs(x0 - x1)) + _x, _y, _, _ = self.app.canvas.bbox(canvas_node.id) + oid = self.app.canvas.find_withtag("rectangle") + x0, y0, x1, y1 = self.app.canvas.bbox(oid[0]) + logging.debug("Dist to most left: %s", abs(x0 - _x)) + logging.debug("White canvas width: %s", abs(x0 - x1)) # calculate the node's location # (as fractions of white canvas's width and height) @@ -156,5 +150,5 @@ class FindDialog(Dialog): xscroll_fraction = xscroll_fraction - 0.05 if yscroll_fraction > 0.05: yscroll_fraction = yscroll_fraction - 0.05 - canvas_node.canvas.xview_moveto(xscroll_fraction) - canvas_node.canvas.yview_moveto(yscroll_fraction) + self.app.canvas.xview_moveto(xscroll_fraction) + self.app.canvas.yview_moveto(yscroll_fraction) diff --git a/daemon/core/gui/dialogs/hooks.py b/daemon/core/gui/dialogs/hooks.py index 391df18f..474dc2d0 100644 --- a/daemon/core/gui/dialogs/hooks.py +++ b/daemon/core/gui/dialogs/hooks.py @@ -1,5 +1,5 @@ import tkinter as tk -from tkinter import messagebox, ttk +from tkinter import ttk from typing import TYPE_CHECKING, Optional from core.api.grpc.wrappers import Hook, SessionState @@ -91,13 +91,6 @@ class HookDialog(Dialog): self.hook.file = file_name self.hook.data = data else: - if file_name in self.app.core.session.hooks: - messagebox.showerror( - "Hook Error", - f"Hook {file_name} already exists!", - parent=self.master, - ) - return self.hook = Hook(state=state, file=file_name, data=data) self.destroy() diff --git a/daemon/core/gui/dialogs/ipdialog.py b/daemon/core/gui/dialogs/ipdialog.py index 99388548..a09ca097 100644 --- a/daemon/core/gui/dialogs/ipdialog.py +++ b/daemon/core/gui/dialogs/ipdialog.py @@ -1,6 +1,6 @@ import tkinter as tk from tkinter import messagebox, ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, List, Optional import netaddr @@ -17,14 +17,12 @@ class IpConfigDialog(Dialog): super().__init__(app, "IP Configuration") self.ip4: str = self.app.guiconfig.ips.ip4 self.ip6: str = self.app.guiconfig.ips.ip6 - self.ip4s: list[str] = self.app.guiconfig.ips.ip4s - self.ip6s: list[str] = self.app.guiconfig.ips.ip6s + self.ip4s: List[str] = self.app.guiconfig.ips.ip4s + self.ip6s: List[str] = self.app.guiconfig.ips.ip6s self.ip4_entry: Optional[ttk.Entry] = None self.ip4_listbox: Optional[ListboxScroll] = None self.ip6_entry: Optional[ttk.Entry] = None self.ip6_listbox: 
Optional[ListboxScroll] = None - self.enable_ip4 = tk.BooleanVar(value=self.app.guiconfig.ips.enable_ip4) - self.enable_ip6 = tk.BooleanVar(value=self.app.guiconfig.ips.enable_ip6) self.draw() def draw(self) -> None: @@ -38,19 +36,10 @@ class IpConfigDialog(Dialog): frame.rowconfigure(0, weight=1) frame.grid(sticky=tk.NSEW, pady=PADY) - ip4_checkbox = ttk.Checkbutton( - frame, text="Enable IP4?", variable=self.enable_ip4 - ) - ip4_checkbox.grid(row=0, column=0, sticky=tk.EW) - ip6_checkbox = ttk.Checkbutton( - frame, text="Enable IP6?", variable=self.enable_ip6 - ) - ip6_checkbox.grid(row=0, column=1, sticky=tk.EW) - ip4_frame = ttk.LabelFrame(frame, text="IPv4", padding=FRAME_PAD) ip4_frame.columnconfigure(0, weight=1) - ip4_frame.rowconfigure(1, weight=1) - ip4_frame.grid(row=1, column=0, stick=tk.NSEW) + ip4_frame.rowconfigure(0, weight=1) + ip4_frame.grid(row=0, column=0, stick="nsew") self.ip4_listbox = ListboxScroll(ip4_frame) self.ip4_listbox.listbox.bind("<>", self.select_ip4) self.ip4_listbox.grid(sticky=tk.NSEW, pady=PADY) @@ -74,7 +63,7 @@ class IpConfigDialog(Dialog): ip6_frame = ttk.LabelFrame(frame, text="IPv6", padding=FRAME_PAD) ip6_frame.columnconfigure(0, weight=1) ip6_frame.rowconfigure(0, weight=1) - ip6_frame.grid(row=1, column=1, stick=tk.NSEW) + ip6_frame.grid(row=0, column=1, stick="nsew") self.ip6_listbox = ListboxScroll(ip6_frame) self.ip6_listbox.listbox.bind("<>", self.select_ip6) self.ip6_listbox.grid(sticky=tk.NSEW, pady=PADY) @@ -97,7 +86,7 @@ class IpConfigDialog(Dialog): # draw buttons frame = ttk.Frame(self.top) - frame.grid(stick=tk.EW) + frame.grid(stick="ew") for i in range(2): frame.columnconfigure(i, weight=1) button = ttk.Button(frame, text="Save", command=self.click_save) @@ -153,18 +142,10 @@ class IpConfigDialog(Dialog): ip6 = self.ip6_listbox.listbox.get(index) ip6s.append(ip6) ip_config = self.app.guiconfig.ips - ip_changed = False - if ip_config.ip4 != self.ip4: - ip_config.ip4 = self.ip4 - ip_changed = True - if ip_config.ip6 != self.ip6: - ip_config.ip6 = self.ip6 - ip_changed = True + ip_config.ip4 = self.ip4 + ip_config.ip6 = self.ip6 ip_config.ip4s = ip4s ip_config.ip6s = ip6s - ip_config.enable_ip4 = self.enable_ip4.get() - ip_config.enable_ip6 = self.enable_ip6.get() - if ip_changed: - self.app.core.ifaces_manager.update_ips(self.ip4, self.ip6) + self.app.core.ifaces_manager.update_ips(self.ip4, self.ip6) self.app.save_config() self.destroy() diff --git a/daemon/core/gui/dialogs/linkconfig.py b/daemon/core/gui/dialogs/linkconfig.py index 6b27d373..6cb22862 100644 --- a/daemon/core/gui/dialogs/linkconfig.py +++ b/daemon/core/gui/dialogs/linkconfig.py @@ -70,10 +70,10 @@ class LinkConfigurationDialog(Dialog): def draw(self) -> None: self.top.columnconfigure(0, weight=1) - src_label = self.edge.src.core_node.name + src_label = self.app.canvas.nodes[self.edge.src].core_node.name if self.edge.link.iface1: src_label += f":{self.edge.link.iface1.name}" - dst_label = self.edge.dst.core_node.name + dst_label = self.app.canvas.nodes[self.edge.dst].core_node.name if self.edge.link.iface2: dst_label += f":{self.edge.link.iface2.name}" label = ttk.Label( @@ -293,7 +293,7 @@ class LinkConfigurationDialog(Dialog): # update edge label self.edge.redraw() - self.edge.check_visibility() + self.edge.check_options() self.destroy() def change_symmetry(self) -> None: @@ -316,8 +316,10 @@ class LinkConfigurationDialog(Dialog): """ populate link config to the table """ - self.width.set(self.edge.width) - self.color.set(self.edge.color) + width = 
self.app.canvas.itemcget(self.edge.id, "width") + self.width.set(width) + color = self.app.canvas.itemcget(self.edge.id, "fill") + self.color.set(color) link = self.edge.link if link.options: self.bandwidth.set(str(link.options.bandwidth)) diff --git a/daemon/core/gui/dialogs/mobilityconfig.py b/daemon/core/gui/dialogs/mobilityconfig.py index 6a2991aa..b22c5fef 100644 --- a/daemon/core/gui/dialogs/mobilityconfig.py +++ b/daemon/core/gui/dialogs/mobilityconfig.py @@ -3,7 +3,7 @@ mobility configuration """ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional import grpc @@ -26,7 +26,7 @@ class MobilityConfigDialog(Dialog): config = self.node.mobility_config if not config: config = self.app.core.get_mobility_config(self.node.id) - self.config: dict[str, ConfigOption] = config + self.config: Dict[str, ConfigOption] = config self.draw() except grpc.RpcError as e: self.app.show_grpc_exception("Get Mobility Config Error", e) diff --git a/daemon/core/gui/dialogs/mobilityplayer.py b/daemon/core/gui/dialogs/mobilityplayer.py index 7b6c4d9f..f27a3635 100644 --- a/daemon/core/gui/dialogs/mobilityplayer.py +++ b/daemon/core/gui/dialogs/mobilityplayer.py @@ -84,17 +84,17 @@ class MobilityPlayerDialog(Dialog): for i in range(3): frame.columnconfigure(i, weight=1) - image = self.app.get_enum_icon(ImageEnum.START, width=ICON_SIZE) + image = self.app.get_icon(ImageEnum.START, ICON_SIZE) self.play_button = ttk.Button(frame, image=image, command=self.click_play) self.play_button.image = image self.play_button.grid(row=0, column=0, sticky=tk.EW, padx=PADX) - image = self.app.get_enum_icon(ImageEnum.PAUSE, width=ICON_SIZE) + image = self.app.get_icon(ImageEnum.PAUSE, ICON_SIZE) self.pause_button = ttk.Button(frame, image=image, command=self.click_pause) self.pause_button.image = image self.pause_button.grid(row=0, column=1, sticky=tk.EW, padx=PADX) - image = self.app.get_enum_icon(ImageEnum.STOP, width=ICON_SIZE) + image = self.app.get_icon(ImageEnum.STOP, ICON_SIZE) self.stop_button = ttk.Button(frame, image=image, command=self.click_stop) self.stop_button.image = image self.stop_button.grid(row=0, column=2, sticky=tk.EW, padx=PADX) @@ -134,7 +134,7 @@ class MobilityPlayerDialog(Dialog): session_id = self.app.core.session.id try: self.app.core.client.mobility_action( - session_id, self.node.id, MobilityAction.START + session_id, self.node.id, MobilityAction.START.value ) except grpc.RpcError as e: self.app.show_grpc_exception("Mobility Error", e) @@ -144,7 +144,7 @@ class MobilityPlayerDialog(Dialog): session_id = self.app.core.session.id try: self.app.core.client.mobility_action( - session_id, self.node.id, MobilityAction.PAUSE + session_id, self.node.id, MobilityAction.PAUSE.value ) except grpc.RpcError as e: self.app.show_grpc_exception("Mobility Error", e) @@ -154,7 +154,7 @@ class MobilityPlayerDialog(Dialog): session_id = self.app.core.session.id try: self.app.core.client.mobility_action( - session_id, self.node.id, MobilityAction.STOP + session_id, self.node.id, MobilityAction.STOP.value ) except grpc.RpcError as e: self.app.show_grpc_exception("Mobility Error", e) diff --git a/daemon/core/gui/dialogs/nodeconfig.py b/daemon/core/gui/dialogs/nodeconfig.py index 162696d4..de591631 100644 --- a/daemon/core/gui/dialogs/nodeconfig.py +++ b/daemon/core/gui/dialogs/nodeconfig.py @@ -2,23 +2,21 @@ import logging import tkinter as tk from functools import partial from tkinter import messagebox, ttk -from typing import 
TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional import netaddr from PIL.ImageTk import PhotoImage from core.api.grpc.wrappers import Interface, Node -from core.gui import images -from core.gui import nodeutils as nutils -from core.gui import validation +from core.gui import nodeutils, validation from core.gui.appconfig import ICONS_PATH from core.gui.dialogs.dialog import Dialog from core.gui.dialogs.emaneconfig import EmaneModelDialog +from core.gui.images import Images +from core.gui.nodeutils import NodeUtils from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import ListboxScroll, image_chooser -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application from core.gui.graph.node import CanvasNode @@ -190,7 +188,7 @@ class NodeConfigDialog(Dialog): if self.node.server: server = self.node.server self.server: tk.StringVar = tk.StringVar(value=server) - self.ifaces: dict[int, InterfaceData] = {} + self.ifaces: Dict[int, InterfaceData] = {} self.draw() def draw(self) -> None: @@ -227,22 +225,27 @@ class NodeConfigDialog(Dialog): row += 1 # node type field - if nutils.is_model(self.node): + if NodeUtils.is_model_node(self.node.type): label = ttk.Label(frame, text="Type") label.grid(row=row, column=0, sticky=tk.EW, padx=PADX, pady=PADY) - entry = ttk.Entry(frame, textvariable=self.type, state=tk.DISABLED) - entry.grid(row=row, column=1, sticky=tk.EW) + combobox = ttk.Combobox( + frame, + textvariable=self.type, + values=list(NodeUtils.NODE_MODELS), + state=combo_state, + ) + combobox.grid(row=row, column=1, sticky=tk.EW) row += 1 # container image field - if nutils.has_image(self.node.type): + if NodeUtils.is_image_node(self.node.type): label = ttk.Label(frame, text="Image") label.grid(row=row, column=0, sticky=tk.EW, padx=PADX, pady=PADY) entry = ttk.Entry(frame, textvariable=self.container_image, state=state) entry.grid(row=row, column=1, sticky=tk.EW) row += 1 - if nutils.is_container(self.node): + if NodeUtils.is_container_node(self.node.type): # server frame.grid(sticky=tk.EW) frame.columnconfigure(1, weight=1) @@ -256,21 +259,21 @@ class NodeConfigDialog(Dialog): combobox.grid(row=row, column=1, sticky=tk.EW) row += 1 - if nutils.is_rj45(self.node): - ifaces = self.app.core.client.get_ifaces() - logger.debug("host machine available interfaces: %s", ifaces) - ifaces_scroll = ListboxScroll(frame) - ifaces_scroll.listbox.config(state=state) - ifaces_scroll.grid( + if NodeUtils.is_rj45_node(self.node.type): + response = self.app.core.client.get_ifaces() + logging.debug("host machine available interfaces: %s", response) + ifaces = ListboxScroll(frame) + ifaces.listbox.config(state=state) + ifaces.grid( row=row, column=0, columnspan=2, sticky=tk.EW, padx=PADX, pady=PADY ) - for inf in sorted(ifaces): - ifaces_scroll.listbox.insert(tk.END, inf) + for inf in sorted(response.ifaces[:]): + ifaces.listbox.insert(tk.END, inf) row += 1 - ifaces_scroll.listbox.bind("<>", self.iface_select) + ifaces.listbox.bind("<>", self.iface_select) # interfaces - if nutils.is_container(self.node): + if self.canvas_node.ifaces: self.draw_ifaces() self.draw_spacer() @@ -293,9 +296,10 @@ class NodeConfigDialog(Dialog): emane_node = self.canvas_node.has_emane_link(iface.id) if emane_node: emane_model = emane_node.emane.split("_")[1] - command = partial(self.click_emane_config, emane_model, iface.id) button = ttk.Button( - tab, text=f"Configure EMANE {emane_model}", command=command + tab, + text=f"Configure EMANE {emane_model}", + 
command=lambda: self.click_emane_config(emane_model, iface.id), ) button.grid(row=row, sticky=tk.EW, columnspan=3, pady=PADY) row += 1 @@ -361,14 +365,13 @@ class NodeConfigDialog(Dialog): button.grid(row=0, column=1, sticky=tk.EW) def click_emane_config(self, emane_model: str, iface_id: int) -> None: - logger.info("configuring emane: %s - %s", emane_model, iface_id) dialog = EmaneModelDialog(self, self.app, self.node, emane_model, iface_id) dialog.show() def click_icon(self) -> None: file_path = image_chooser(self, ICONS_PATH) if file_path: - self.image = images.from_file(file_path, width=images.NODE_SIZE) + self.image = Images.create(file_path, nodeutils.ICON_SIZE) self.image_button.config(image=self.image) self.image_file = file_path @@ -377,10 +380,10 @@ class NodeConfigDialog(Dialog): # update core node self.node.name = self.name.get() - if nutils.has_image(self.node.type): + if NodeUtils.is_image_node(self.node.type): self.node.image = self.container_image.get() server = self.server.get() - if nutils.is_container(self.node): + if NodeUtils.is_container_node(self.node.type): if server == DEFAULT_SERVER: self.node.server = None else: diff --git a/daemon/core/gui/dialogs/nodeconfigservice.py b/daemon/core/gui/dialogs/nodeconfigservice.py index ce718080..2141b3dc 100644 --- a/daemon/core/gui/dialogs/nodeconfigservice.py +++ b/daemon/core/gui/dialogs/nodeconfigservice.py @@ -4,7 +4,7 @@ core node services import logging import tkinter as tk from tkinter import messagebox, ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Optional, Set from core.api.grpc.wrappers import Node from core.gui.dialogs.configserviceconfig import ConfigServiceConfigDialog @@ -12,15 +12,13 @@ from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import CheckboxList, ListboxScroll -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application class NodeConfigServiceDialog(Dialog): def __init__( - self, app: "Application", node: Node, services: set[str] = None + self, app: "Application", node: Node, services: Set[str] = None ) -> None: title = f"{node.name} Config Services" super().__init__(app, title) @@ -30,8 +28,7 @@ class NodeConfigServiceDialog(Dialog): self.current: Optional[ListboxScroll] = None if services is None: services = set(node.config_services) - self.current_services: set[str] = services - self.protocol("WM_DELETE_WINDOW", self.click_cancel) + self.current_services: Set[str] = services self.draw() def draw(self) -> None: @@ -103,7 +100,6 @@ class NodeConfigServiceDialog(Dialog): self.current_services.add(name) elif not var.get() and name in self.current_services: self.current_services.remove(name) - self.node.config_service_configs.pop(name, None) self.draw_current_services() self.node.config_services = self.current_services.copy() @@ -135,7 +131,7 @@ class NodeConfigServiceDialog(Dialog): def click_save(self) -> None: self.node.config_services = self.current_services.copy() - logger.info("saved node config services: %s", self.node.config_services) + logging.info("saved node config services: %s", self.node.config_services) self.destroy() def click_cancel(self) -> None: @@ -148,7 +144,6 @@ class NodeConfigServiceDialog(Dialog): service = self.current.listbox.get(cur[0]) self.current.listbox.delete(cur[0]) self.current_services.remove(service) - self.node.config_service_configs.pop(service, None) for checkbutton in self.services.frame.winfo_children(): if checkbutton["text"] == 
service: checkbutton.invoke() diff --git a/daemon/core/gui/dialogs/nodeservice.py b/daemon/core/gui/dialogs/nodeservice.py index 66e83fa4..09732e73 100644 --- a/daemon/core/gui/dialogs/nodeservice.py +++ b/daemon/core/gui/dialogs/nodeservice.py @@ -3,7 +3,7 @@ core node services """ import tkinter as tk from tkinter import messagebox, ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Optional, Set from core.api.grpc.wrappers import Node from core.gui.dialogs.dialog import Dialog @@ -17,15 +17,14 @@ if TYPE_CHECKING: class NodeServiceDialog(Dialog): def __init__(self, app: "Application", node: Node) -> None: - title = f"{node.name} Services (Deprecated)" + title = f"{node.name} Services" super().__init__(app, title) self.node: Node = node self.groups: Optional[ListboxScroll] = None self.services: Optional[CheckboxList] = None self.current: Optional[ListboxScroll] = None services = set(node.services) - self.current_services: set[str] = services - self.protocol("WM_DELETE_WINDOW", self.click_cancel) + self.current_services: Set[str] = services self.draw() def draw(self) -> None: @@ -78,7 +77,7 @@ class NodeServiceDialog(Dialog): button.grid(row=0, column=1, sticky=tk.EW, padx=PADX) button = ttk.Button(frame, text="Remove", command=self.click_remove) button.grid(row=0, column=2, sticky=tk.EW, padx=PADX) - button = ttk.Button(frame, text="Cancel", command=self.click_cancel) + button = ttk.Button(frame, text="Cancel", command=self.destroy) button.grid(row=0, column=3, sticky=tk.EW) # trigger group change @@ -99,8 +98,6 @@ class NodeServiceDialog(Dialog): self.current_services.add(name) elif not var.get() and name in self.current_services: self.current_services.remove(name) - self.node.service_configs.pop(name, None) - self.node.service_file_configs.pop(name, None) self.current.listbox.delete(0, tk.END) for name in sorted(self.current_services): self.current.listbox.insert(tk.END, name) @@ -128,9 +125,6 @@ class NodeServiceDialog(Dialog): "Service Configuration", "Select a service to configure", parent=self ) - def click_cancel(self) -> None: - self.destroy() - def click_save(self) -> None: self.node.services = self.current_services.copy() self.destroy() @@ -141,8 +135,6 @@ class NodeServiceDialog(Dialog): service = self.current.listbox.get(cur[0]) self.current.listbox.delete(cur[0]) self.current_services.remove(service) - self.node.service_configs.pop(service, None) - self.node.service_file_configs.pop(service, None) for checkbutton in self.services.frame.winfo_children(): if checkbutton["text"] == service: checkbutton.invoke() diff --git a/daemon/core/gui/dialogs/preferences.py b/daemon/core/gui/dialogs/preferences.py index 4a6a1c08..d0c58dfa 100644 --- a/daemon/core/gui/dialogs/preferences.py +++ b/daemon/core/gui/dialogs/preferences.py @@ -9,8 +9,6 @@ from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX, PADY, scale_fonts from core.gui.validation import LARGEST_SCALE, SMALLEST_SCALE -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -104,7 +102,7 @@ class PreferencesDialog(Dialog): def theme_change(self, event: tk.Event) -> None: theme = self.theme.get() - logger.info("changing theme: %s", theme) + logging.info("changing theme: %s", theme) self.app.style.theme_use(theme) def click_save(self) -> None: @@ -136,8 +134,7 @@ class PreferencesDialog(Dialog): # scale toolbar and canvas items self.app.toolbar.scale() - for canvas in self.app.manager.all(): - canvas.scale_graph() + 
self.app.canvas.scale_graph() def adjust_scale(self, arg1: str, arg2: str, arg3: str) -> None: scale_value = self.gui_scale.get() diff --git a/daemon/core/gui/dialogs/runtool.py b/daemon/core/gui/dialogs/runtool.py index 75789893..45e21182 100644 --- a/daemon/core/gui/dialogs/runtool.py +++ b/daemon/core/gui/dialogs/runtool.py @@ -1,9 +1,9 @@ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional -from core.gui import nodeutils as nutils from core.gui.dialogs.dialog import Dialog +from core.gui.nodeutils import NodeUtils from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import CodeText, ListboxScroll @@ -17,7 +17,7 @@ class RunToolDialog(Dialog): self.cmd: tk.StringVar = tk.StringVar(value="ps ax") self.result: Optional[CodeText] = None self.node_list: Optional[ListboxScroll] = None - self.executable_nodes: dict[str, int] = {} + self.executable_nodes: Dict[str, int] = {} self.store_nodes() self.draw() @@ -26,7 +26,7 @@ class RunToolDialog(Dialog): store all CORE nodes (nodes that execute commands) from all existing nodes """ for node in self.app.core.session.nodes.values(): - if nutils.is_container(node): + if NodeUtils.is_container_node(node.type): self.executable_nodes[node.name] = node.id def draw(self) -> None: @@ -106,8 +106,10 @@ class RunToolDialog(Dialog): for selection in self.node_list.listbox.curselection(): node_name = self.node_list.listbox.get(selection) node_id = self.executable_nodes[node_name] - _, output = self.app.core.client.node_command( + response = self.app.core.client.node_command( self.app.core.session.id, node_id, command ) - self.result.text.insert(tk.END, f"> {node_name} > {command}:\n{output}\n") + self.result.text.insert( + tk.END, f"> {node_name} > {command}:\n{response.output}\n" + ) self.result.text.config(state=tk.DISABLED) diff --git a/daemon/core/gui/dialogs/serviceconfig.py b/daemon/core/gui/dialogs/serviceconfig.py index 5eec7faf..541a490e 100644 --- a/daemon/core/gui/dialogs/serviceconfig.py +++ b/daemon/core/gui/dialogs/serviceconfig.py @@ -1,22 +1,19 @@ import logging +import os import tkinter as tk -from pathlib import Path -from tkinter import filedialog, messagebox, ttk -from typing import TYPE_CHECKING, Optional +from tkinter import filedialog, ttk +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple import grpc from PIL.ImageTk import PhotoImage from core.api.grpc.wrappers import Node, NodeServiceData, ServiceValidationMode -from core.gui import images from core.gui.dialogs.copyserviceconfig import CopyServiceConfigDialog from core.gui.dialogs.dialog import Dialog -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images from core.gui.themes import FRAME_PAD, PADX, PADY from core.gui.widgets import CodeText, ListboxScroll -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application from core.gui.coreclient import CoreClient @@ -28,33 +25,33 @@ class ServiceConfigDialog(Dialog): def __init__( self, master: tk.BaseWidget, app: "Application", service_name: str, node: Node ) -> None: - title = f"{service_name} Service (Deprecated)" + title = f"{service_name} Service" super().__init__(app, title, master=master) self.core: "CoreClient" = app.core self.node: Node = node self.service_name: str = service_name self.radiovar: tk.IntVar = tk.IntVar(value=2) self.metadata: str = "" - self.filenames: list[str] = [] - self.dependencies: list[str] = [] - self.executables: 
list[str] = [] - self.startup_commands: list[str] = [] - self.validation_commands: list[str] = [] - self.shutdown_commands: list[str] = [] - self.default_startup: list[str] = [] - self.default_validate: list[str] = [] - self.default_shutdown: list[str] = [] + self.filenames: List[str] = [] + self.dependencies: List[str] = [] + self.executables: List[str] = [] + self.startup_commands: List[str] = [] + self.validation_commands: List[str] = [] + self.shutdown_commands: List[str] = [] + self.default_startup: List[str] = [] + self.default_validate: List[str] = [] + self.default_shutdown: List[str] = [] self.validation_mode: Optional[ServiceValidationMode] = None self.validation_time: Optional[int] = None self.validation_period: Optional[float] = None self.directory_entry: Optional[ttk.Entry] = None - self.default_directories: list[str] = [] - self.temp_directories: list[str] = [] - self.documentnew_img: PhotoImage = self.app.get_enum_icon( - ImageEnum.DOCUMENTNEW, width=ICON_SIZE + self.default_directories: List[str] = [] + self.temp_directories: List[str] = [] + self.documentnew_img: PhotoImage = self.app.get_icon( + ImageEnum.DOCUMENTNEW, ICON_SIZE ) - self.editdelete_img: PhotoImage = self.app.get_enum_icon( - ImageEnum.EDITDELETE, width=ICON_SIZE + self.editdelete_img: PhotoImage = self.app.get_icon( + ImageEnum.EDITDELETE, ICON_SIZE ) self.notebook: Optional[ttk.Notebook] = None self.metadata_entry: Optional[ttk.Entry] = None @@ -67,10 +64,10 @@ class ServiceConfigDialog(Dialog): self.validation_mode_entry: Optional[ttk.Entry] = None self.service_file_data: Optional[CodeText] = None self.validation_period_entry: Optional[ttk.Entry] = None - self.original_service_files: dict[str, str] = {} + self.original_service_files: Dict[str, str] = {} self.default_config: Optional[NodeServiceData] = None - self.temp_service_files: dict[str, str] = {} - self.modified_files: set[str] = set() + self.temp_service_files: Dict[str, str] = {} + self.modified_files: Set[str] = set() self.has_error: bool = False self.load() if not self.has_error: @@ -78,7 +75,7 @@ class ServiceConfigDialog(Dialog): def load(self) -> None: try: - self.core.start_session(definition=True) + self.app.core.create_nodes_and_links() default_config = self.app.core.get_node_service( self.node.id, self.service_name ) @@ -182,7 +179,7 @@ class ServiceConfigDialog(Dialog): button.grid(row=0, column=0, sticky=tk.W, padx=PADX) entry = ttk.Entry(frame, state=tk.DISABLED) entry.grid(row=0, column=1, sticky=tk.EW, padx=PADX) - image = images.from_enum(ImageEnum.FILEOPEN, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.FILEOPEN, 16) button = ttk.Button(frame, image=image) button.image = image button.grid(row=0, column=2) @@ -197,11 +194,11 @@ class ServiceConfigDialog(Dialog): value=2, ) button.grid(row=0, column=0, sticky=tk.EW) - image = images.from_enum(ImageEnum.FILEOPEN, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.FILEOPEN, 16) button = ttk.Button(frame, image=image) button.image = image button.grid(row=0, column=1) - image = images.from_enum(ImageEnum.DOCUMENTSAVE, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.DOCUMENTSAVE, 16) button = ttk.Button(frame, image=image) button.image = image button.grid(row=0, column=2) @@ -390,7 +387,7 @@ class ServiceConfigDialog(Dialog): 1.0, "end" ) else: - logger.debug("file already existed") + logging.debug("file already existed") def delete_filename(self) -> None: cbb = self.filename_combobox @@ -449,31 +446,36 @@ class ServiceConfigDialog(Dialog): 
self.current_service_color("") self.destroy() return - files = set(self.filenames) - if ( - self.is_custom_command() - or self.has_new_files() - or self.is_custom_directory() - ): - startup, validate, shutdown = self.get_commands() - files = set(self.filename_combobox["values"]) - service_data = NodeServiceData( - configs=list(files), - dirs=self.temp_directories, - startup=startup, - validate=validate, - shutdown=shutdown, - ) - logger.info("setting service data: %s", service_data) - self.node.service_configs[self.service_name] = service_data - for file in self.modified_files: - if file not in files: - continue - file_configs = self.node.service_file_configs.setdefault( - self.service_name, {} - ) - file_configs[file] = self.temp_service_files[file] - self.current_service_color("green") + + try: + if ( + self.is_custom_command() + or self.has_new_files() + or self.is_custom_directory() + ): + startup, validate, shutdown = self.get_commands() + config = self.core.set_node_service( + self.node.id, + self.service_name, + dirs=self.temp_directories, + files=list(self.filename_combobox["values"]), + startups=startup, + validations=validate, + shutdowns=shutdown, + ) + self.node.service_configs[self.service_name] = config + for file in self.modified_files: + file_configs = self.node.service_file_configs.setdefault( + self.service_name, {} + ) + file_configs[file] = self.temp_service_files[file] + # TODO: check if this is really needed + self.app.core.set_node_service_file( + self.node.id, self.service_name, file, self.temp_service_files[file] + ) + self.current_service_color("green") + except grpc.RpcError as e: + self.app.show_grpc_exception("Save Service Config Error", e) self.destroy() def display_service_file_data(self, event: tk.Event) -> None: @@ -558,13 +560,13 @@ class ServiceConfigDialog(Dialog): @classmethod def append_commands( - cls, commands: list[str], listbox: tk.Listbox, to_add: list[str] + cls, commands: List[str], listbox: tk.Listbox, to_add: List[str] ) -> None: for cmd in to_add: commands.append(cmd) listbox.insert(tk.END, cmd) - def get_commands(self) -> tuple[list[str], list[str], list[str]]: + def get_commands(self) -> Tuple[List[str], List[str], List[str]]: startup = self.startup_commands_listbox.get(0, "end") shutdown = self.shutdown_commands_listbox.get(0, "end") validate = self.validate_commands_listbox.get(0, "end") @@ -576,13 +578,11 @@ class ServiceConfigDialog(Dialog): self.directory_entry.insert("end", d) def add_directory(self) -> None: - directory = Path(self.directory_entry.get()) - if directory.is_absolute(): - if str(directory) not in self.temp_directories: - self.dir_list.listbox.insert("end", directory) - self.temp_directories.append(str(directory)) - else: - messagebox.showerror("Add Directory", "Path must be absolute!", parent=self) + d = self.directory_entry.get() + if os.path.isdir(d): + if d not in self.temp_directories: + self.dir_list.listbox.insert("end", d) + self.temp_directories.append(d) def remove_directory(self) -> None: d = self.directory_entry.get() @@ -593,7 +593,7 @@ class ServiceConfigDialog(Dialog): i = dirs.index(d) self.dir_list.listbox.delete(i) except ValueError: - logger.debug("directory is not in the list") + logging.debug("directory is not in the list") self.directory_entry.delete(0, "end") def directory_select(self, event) -> None: diff --git a/daemon/core/gui/dialogs/sessionoptions.py b/daemon/core/gui/dialogs/sessionoptions.py index 28d780dc..4b086d67 100644 --- a/daemon/core/gui/dialogs/sessionoptions.py +++ 
b/daemon/core/gui/dialogs/sessionoptions.py @@ -1,14 +1,15 @@ import logging import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional +import grpc + +from core.api.grpc.wrappers import ConfigOption from core.gui.dialogs.dialog import Dialog from core.gui.themes import PADX, PADY from core.gui.widgets import ConfigFrame -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -18,15 +19,25 @@ class SessionOptionsDialog(Dialog): super().__init__(app, "Session Options") self.config_frame: Optional[ConfigFrame] = None self.has_error: bool = False + self.config: Dict[str, ConfigOption] = self.get_config() self.enabled: bool = not self.app.core.is_runtime() if not self.has_error: self.draw() + def get_config(self) -> Dict[str, ConfigOption]: + try: + session_id = self.app.core.session.id + response = self.app.core.client.get_session_options(session_id) + return ConfigOption.from_dict(response.config) + except grpc.RpcError as e: + self.app.show_grpc_exception("Get Session Options Error", e) + self.has_error = True + self.destroy() + def draw(self) -> None: self.top.columnconfigure(0, weight=1) self.top.rowconfigure(0, weight=1) - options = self.app.core.session.options - self.config_frame = ConfigFrame(self.top, self.app, options, self.enabled) + self.config_frame = ConfigFrame(self.top, self.app, self.config, self.enabled) self.config_frame.draw_config() self.config_frame.grid(sticky=tk.NSEW, pady=PADY) @@ -42,6 +53,10 @@ class SessionOptionsDialog(Dialog): def save(self) -> None: config = self.config_frame.parse_config() - for key, value in config.items(): - self.app.core.session.options[key].value = value + try: + session_id = self.app.core.session.id + response = self.app.core.client.set_session_options(session_id, config) + logging.info("saved session config: %s", response) + except grpc.RpcError as e: + self.app.show_grpc_exception("Set Session Options Error", e) self.destroy() diff --git a/daemon/core/gui/dialogs/sessions.py b/daemon/core/gui/dialogs/sessions.py index 3ca4fa63..71a33fd6 100644 --- a/daemon/core/gui/dialogs/sessions.py +++ b/daemon/core/gui/dialogs/sessions.py @@ -1,19 +1,16 @@ import logging import tkinter as tk from tkinter import messagebox, ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, List, Optional import grpc from core.api.grpc.wrappers import SessionState, SessionSummary -from core.gui import images from core.gui.dialogs.dialog import Dialog -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images from core.gui.task import ProgressTask from core.gui.themes import PADX, PADY -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -25,16 +22,17 @@ class SessionsDialog(Dialog): self.selected_session: Optional[int] = None self.selected_id: Optional[int] = None self.tree: Optional[ttk.Treeview] = None + self.sessions: List[SessionSummary] = self.get_sessions() self.connect_button: Optional[ttk.Button] = None self.delete_button: Optional[ttk.Button] = None self.protocol("WM_DELETE_WINDOW", self.on_closing) self.draw() - def get_sessions(self) -> list[SessionSummary]: + def get_sessions(self) -> List[SessionSummary]: try: - sessions = self.app.core.client.get_sessions() - logger.info("sessions: %s", sessions) - return sorted(sessions, key=lambda x: x.id) + response = self.app.core.client.get_sessions() + logging.info("sessions: %s", response) + 
return [SessionSummary.from_proto(x) for x in response.sessions] except grpc.RpcError as e: self.app.show_grpc_exception("Get Sessions Error", e) self.destroy() @@ -81,7 +79,15 @@ class SessionsDialog(Dialog): self.tree.heading("state", text="State") self.tree.column("nodes", stretch=tk.YES, anchor="center") self.tree.heading("nodes", text="Node Count") - self.draw_sessions() + + for index, session in enumerate(self.sessions): + state_name = SessionState(session.state).name + self.tree.insert( + "", + tk.END, + text=str(session.id), + values=(session.id, state_name, session.nodes), + ) self.tree.bind("", self.double_click_join) self.tree.bind("<>", self.click_select) @@ -93,31 +99,20 @@ class SessionsDialog(Dialog): xscrollbar.grid(row=1, sticky=tk.EW) self.tree.configure(xscrollcommand=xscrollbar.set) - def draw_sessions(self) -> None: - self.tree.delete(*self.tree.get_children()) - for index, session in enumerate(self.get_sessions()): - state_name = SessionState(session.state).name - self.tree.insert( - "", - tk.END, - text=str(session.id), - values=(session.id, state_name, session.nodes), - ) - def draw_buttons(self) -> None: frame = ttk.Frame(self.top) for i in range(4): frame.columnconfigure(i, weight=1) frame.grid(sticky=tk.EW) - image = images.from_enum(ImageEnum.DOCUMENTNEW, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.DOCUMENTNEW, 16) b = ttk.Button( frame, image=image, text="New", compound=tk.LEFT, command=self.click_new ) b.image = image b.grid(row=0, padx=PADX, sticky=tk.EW) - image = images.from_enum(ImageEnum.FILEOPEN, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.FILEOPEN, 16) self.connect_button = ttk.Button( frame, image=image, @@ -129,7 +124,7 @@ class SessionsDialog(Dialog): self.connect_button.image = image self.connect_button.grid(row=0, column=1, padx=PADX, sticky=tk.EW) - image = images.from_enum(ImageEnum.DELETE, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.DELETE, 16) self.delete_button = ttk.Button( frame, image=image, @@ -141,7 +136,7 @@ class SessionsDialog(Dialog): self.delete_button.image = image self.delete_button.grid(row=0, column=2, padx=PADX, sticky=tk.EW) - image = images.from_enum(ImageEnum.CANCEL, width=images.BUTTON_SIZE) + image = Images.get(ImageEnum.CANCEL, 16) if self.is_start_app: b = ttk.Button( frame, @@ -177,7 +172,7 @@ class SessionsDialog(Dialog): self.selected_id = None self.delete_button.config(state=tk.DISABLED) self.connect_button.config(state=tk.DISABLED) - logger.debug("selected session: %s", self.selected_session) + logging.debug("selected session: %s", self.selected_session) def click_connect(self) -> None: if not self.selected_session: @@ -201,21 +196,12 @@ class SessionsDialog(Dialog): def click_delete(self) -> None: if not self.selected_session: return - logger.info("click delete session: %s", self.selected_session) + logging.debug("delete session: %s", self.selected_session) self.tree.delete(self.selected_id) self.app.core.delete_session(self.selected_session) - session_id = None - if self.app.core.session: - session_id = self.app.core.session.id - if self.selected_session == session_id: - self.app.core.session = None - sessions = self.get_sessions() - if not sessions: - self.app.core.create_new_session() - self.draw_sessions() - else: - session_id = sessions[0].id - self.app.core.join_session(session_id) + if self.selected_session == self.app.core.session.id: + self.click_new() + self.destroy() self.click_select() def click_exit(self) -> None: diff --git a/daemon/core/gui/dialogs/shapemod.py 
b/daemon/core/gui/dialogs/shapemod.py index db19ff1a..255092ec 100644 --- a/daemon/core/gui/dialogs/shapemod.py +++ b/daemon/core/gui/dialogs/shapemod.py @@ -3,7 +3,7 @@ shape input dialog """ import tkinter as tk from tkinter import font, ttk -from typing import TYPE_CHECKING, Optional, Union +from typing import TYPE_CHECKING, List, Optional, Union from core.gui.dialogs.colorpicker import ColorPickerDialog from core.gui.dialogs.dialog import Dialog @@ -16,8 +16,8 @@ if TYPE_CHECKING: from core.gui.graph.graph import CanvasGraph from core.gui.graph.shape import Shape -FONT_SIZES: list[int] = [8, 9, 10, 11, 12, 14, 16, 18, 20, 22, 24, 26, 28, 36, 48, 72] -BORDER_WIDTH: list[int] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] +FONT_SIZES: List[int] = [8, 9, 10, 11, 12, 14, 16, 18, 20, 22, 24, 26, 28, 36, 48, 72] +BORDER_WIDTH: List[int] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] class ShapeDialog(Dialog): @@ -27,7 +27,7 @@ class ShapeDialog(Dialog): else: title = "Add Text" super().__init__(app, title) - self.canvas: "CanvasGraph" = app.manager.current() + self.canvas: "CanvasGraph" = app.canvas self.fill: Optional[ttk.Label] = None self.border: Optional[ttk.Label] = None self.shape: "Shape" = shape @@ -168,7 +168,7 @@ class ShapeDialog(Dialog): self.add_text() self.destroy() - def make_font(self) -> list[Union[int, str]]: + def make_font(self) -> List[Union[int, str]]: """ create font for text or shape label """ diff --git a/daemon/core/gui/dialogs/throughput.py b/daemon/core/gui/dialogs/throughput.py index 493d4da4..0b59a6ac 100644 --- a/daemon/core/gui/dialogs/throughput.py +++ b/daemon/core/gui/dialogs/throughput.py @@ -7,7 +7,7 @@ from typing import TYPE_CHECKING, Optional from core.gui.dialogs.colorpicker import ColorPickerDialog from core.gui.dialogs.dialog import Dialog -from core.gui.graph.manager import CanvasManager +from core.gui.graph.graph import CanvasGraph from core.gui.themes import FRAME_PAD, PADX, PADY if TYPE_CHECKING: @@ -17,16 +17,16 @@ if TYPE_CHECKING: class ThroughputDialog(Dialog): def __init__(self, app: "Application") -> None: super().__init__(app, "Throughput Config") - self.manager: CanvasManager = app.manager + self.canvas: CanvasGraph = app.canvas self.show_throughput: tk.IntVar = tk.IntVar(value=1) self.exponential_weight: tk.IntVar = tk.IntVar(value=1) self.transmission: tk.IntVar = tk.IntVar(value=1) self.reception: tk.IntVar = tk.IntVar(value=1) self.threshold: tk.DoubleVar = tk.DoubleVar( - value=self.manager.throughput_threshold + value=self.canvas.throughput_threshold ) - self.width: tk.IntVar = tk.IntVar(value=self.manager.throughput_width) - self.color: str = self.manager.throughput_color + self.width: tk.IntVar = tk.IntVar(value=self.canvas.throughput_width) + self.color: str = self.canvas.throughput_color self.color_button: Optional[tk.Button] = None self.top.columnconfigure(0, weight=1) self.draw() @@ -106,7 +106,7 @@ class ThroughputDialog(Dialog): self.color_button.config(bg=self.color, text=self.color, bd=0) def click_save(self) -> None: - self.manager.throughput_threshold = self.threshold.get() - self.manager.throughput_width = self.width.get() - self.manager.throughput_color = self.color + self.canvas.throughput_threshold = self.threshold.get() + self.canvas.throughput_width = self.width.get() + self.canvas.throughput_color = self.color self.destroy() diff --git a/daemon/core/gui/dialogs/wirelessconfig.py b/daemon/core/gui/dialogs/wirelessconfig.py deleted file mode 100644 index b04fbd2c..00000000 --- a/daemon/core/gui/dialogs/wirelessconfig.py +++ /dev/null 
@@ -1,55 +0,0 @@ -import tkinter as tk -from tkinter import ttk -from typing import TYPE_CHECKING, Optional - -import grpc - -from core.api.grpc.wrappers import ConfigOption, Node -from core.gui.dialogs.dialog import Dialog -from core.gui.themes import PADX, PADY -from core.gui.widgets import ConfigFrame - -if TYPE_CHECKING: - from core.gui.app import Application - from core.gui.graph.node import CanvasNode - - -class WirelessConfigDialog(Dialog): - def __init__(self, app: "Application", canvas_node: "CanvasNode"): - super().__init__(app, f"Wireless Configuration - {canvas_node.core_node.name}") - self.node: Node = canvas_node.core_node - self.config_frame: Optional[ConfigFrame] = None - self.config: dict[str, ConfigOption] = {} - try: - config = self.node.wireless_config - if not config: - config = self.app.core.get_wireless_config(self.node.id) - self.config: dict[str, ConfigOption] = config - self.draw() - except grpc.RpcError as e: - self.app.show_grpc_exception("Wireless Config Error", e) - self.has_error: bool = True - self.destroy() - - def draw(self) -> None: - self.top.columnconfigure(0, weight=1) - self.top.rowconfigure(0, weight=1) - self.config_frame = ConfigFrame(self.top, self.app, self.config) - self.config_frame.draw_config() - self.config_frame.grid(sticky=tk.NSEW, pady=PADY) - self.draw_buttons() - - def draw_buttons(self) -> None: - frame = ttk.Frame(self.top) - frame.grid(sticky=tk.EW) - for i in range(2): - frame.columnconfigure(i, weight=1) - button = ttk.Button(frame, text="Apply", command=self.click_apply) - button.grid(row=0, column=0, padx=PADX, sticky=tk.EW) - button = ttk.Button(frame, text="Cancel", command=self.destroy) - button.grid(row=0, column=1, sticky=tk.EW) - - def click_apply(self) -> None: - self.config_frame.parse_config() - self.node.wireless_config = self.config - self.destroy() diff --git a/daemon/core/gui/dialogs/wlanconfig.py b/daemon/core/gui/dialogs/wlanconfig.py index c382d3c8..05362cc6 100644 --- a/daemon/core/gui/dialogs/wlanconfig.py +++ b/daemon/core/gui/dialogs/wlanconfig.py @@ -1,6 +1,6 @@ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, Optional import grpc @@ -21,19 +21,19 @@ RANGE_WIDTH: int = 3 class WlanConfigDialog(Dialog): def __init__(self, app: "Application", canvas_node: "CanvasNode") -> None: super().__init__(app, f"{canvas_node.core_node.name} WLAN Configuration") - self.canvas: "CanvasGraph" = app.manager.current() + self.canvas: "CanvasGraph" = app.canvas self.canvas_node: "CanvasNode" = canvas_node self.node: Node = canvas_node.core_node self.config_frame: Optional[ConfigFrame] = None self.range_entry: Optional[ttk.Entry] = None self.has_error: bool = False - self.ranges: dict[int, int] = {} + self.ranges: Dict[int, int] = {} self.positive_int: int = self.app.master.register(self.validate_and_update) try: config = self.node.wlan_config if not config: config = self.app.core.get_wlan_config(self.node.id) - self.config: dict[str, ConfigOption] = config + self.config: Dict[str, ConfigOption] = config self.init_draw_range() self.draw() except grpc.RpcError as e: diff --git a/daemon/core/gui/frames/link.py b/daemon/core/gui/frames/link.py index bde0aec8..086f7ca8 100644 --- a/daemon/core/gui/frames/link.py +++ b/daemon/core/gui/frames/link.py @@ -79,13 +79,15 @@ class WirelessEdgeInfoFrame(InfoFrameBase): def draw(self) -> None: link = self.edge.link - src_node = self.edge.src.core_node - dst_node = self.edge.dst.core_node + src_canvas_node = 
self.app.canvas.nodes[self.edge.src] + src_node = src_canvas_node.core_node + dst_canvas_node = self.app.canvas.nodes[self.edge.dst] + dst_node = dst_canvas_node.core_node # find interface for each node connected to network net_id = link.network_id - iface1 = get_iface(self.edge.src, net_id) - iface2 = get_iface(self.edge.dst, net_id) + iface1 = get_iface(src_canvas_node, net_id) + iface2 = get_iface(dst_canvas_node, net_id) frame = DetailsFrame(self) frame.grid(sticky=tk.EW) diff --git a/daemon/core/gui/frames/node.py b/daemon/core/gui/frames/node.py index afd03735..5508784d 100644 --- a/daemon/core/gui/frames/node.py +++ b/daemon/core/gui/frames/node.py @@ -2,8 +2,8 @@ import tkinter as tk from typing import TYPE_CHECKING from core.api.grpc.wrappers import NodeType -from core.gui import nodeutils as nutils from core.gui.frames.base import DetailsFrame, InfoFrameBase +from core.gui.nodeutils import NodeUtils if TYPE_CHECKING: from core.gui.app import Application @@ -20,21 +20,21 @@ class NodeInfoFrame(InfoFrameBase): node = self.canvas_node.core_node frame = DetailsFrame(self) frame.grid(sticky=tk.EW) - frame.add_detail("ID", str(node.id)) + frame.add_detail("ID", node.id) frame.add_detail("Name", node.name) - if nutils.is_model(node): + if NodeUtils.is_model_node(node.type): frame.add_detail("Type", node.model) - if nutils.is_container(node): + if NodeUtils.is_container_node(node.type): for index, service in enumerate(sorted(node.services)): if index == 0: frame.add_detail("Services", service) else: frame.add_detail("", service) if node.type == NodeType.EMANE: - emane = "".join(node.emane.split("_")[1:]) + emane = node.emane.split("_")[1:] frame.add_detail("EMANE", emane) - if nutils.has_image(node.type): + if NodeUtils.is_image_node(node.type): frame.add_detail("Image", node.image) - if nutils.is_container(node): + if NodeUtils.is_container_node(node.type): server = node.server if node.server else "localhost" frame.add_detail("Server", server) diff --git a/daemon/core/gui/graph/edges.py b/daemon/core/gui/graph/edges.py index e5a4c97b..216fc7f2 100644 --- a/daemon/core/gui/graph/edges.py +++ b/daemon/core/gui/graph/edges.py @@ -1,23 +1,18 @@ -import functools import logging import math import tkinter as tk -from typing import TYPE_CHECKING, Optional, Union +from typing import TYPE_CHECKING, Optional, Tuple from core.api.grpc.wrappers import Interface, Link -from core.gui import nodeutils, themes +from core.gui import themes from core.gui.dialogs.linkconfig import LinkConfigurationDialog from core.gui.frames.link import EdgeInfoFrame, WirelessEdgeInfoFrame from core.gui.graph import tags +from core.gui.nodeutils import NodeUtils from core.gui.utils import bandwidth_text, delay_jitter_text -logger = logging.getLogger(__name__) - if TYPE_CHECKING: - from core.gui.app import Application from core.gui.graph.graph import CanvasGraph - from core.gui.graph.manager import CanvasManager - from core.gui.graph.node import CanvasNode, ShadowNode TEXT_DISTANCE: int = 60 EDGE_WIDTH: int = 3 @@ -29,40 +24,13 @@ ARC_DISTANCE: int = 50 def create_wireless_token(src: int, dst: int, network: int) -> str: - if src < dst: - node1, node2 = src, dst - else: - node1, node2 = dst, src - return f"{node1}-{node2}-{network}" + return f"{src}-{dst}-{network}" def create_edge_token(link: Link) -> str: iface1_id = link.iface1.id if link.iface1 else 0 iface2_id = link.iface2.id if link.iface2 else 0 - if link.node1_id < link.node2_id: - node1 = link.node1_id - node1_iface = iface1_id - node2 = link.node2_id - 
node2_iface = iface2_id - else: - node1 = link.node2_id - node1_iface = iface2_id - node2 = link.node1_id - node2_iface = iface1_id - return f"{node1}-{node1_iface}-{node2}-{node2_iface}" - - -def node_label_positions( - src_x: int, src_y: int, dst_x: int, dst_y: int -) -> tuple[tuple[float, float], tuple[float, float]]: - v_x, v_y = dst_x - src_x, dst_y - src_y - v_len = math.sqrt(v_x**2 + v_y**2) - if v_len == 0: - u_x, u_y = 0.0, 0.0 - else: - u_x, u_y = v_x / v_len, v_y / v_len - offset_x, offset_y = TEXT_DISTANCE * u_x, TEXT_DISTANCE * u_y - return (src_x + offset_x, src_y + offset_y), (dst_x - offset_x, dst_y - offset_y) + return f"{link.node1_id}-{iface1_id}-{link.node2_id}-{iface2_id}" def arc_edges(edges) -> None: @@ -97,39 +65,25 @@ def arc_edges(edges) -> None: class Edge: tag: str = tags.EDGE - def __init__( - self, app: "Application", src: "CanvasNode", dst: "CanvasNode" = None - ) -> None: - self.app: "Application" = app - self.manager: CanvasManager = app.manager + def __init__(self, canvas: "CanvasGraph", src: int, dst: int = None) -> None: + self.canvas = canvas self.id: Optional[int] = None - self.id2: Optional[int] = None - self.src: "CanvasNode" = src - self.src_shadow: Optional[ShadowNode] = None - self.dst: Optional["CanvasNode"] = dst - self.dst_shadow: Optional[ShadowNode] = None - self.link: Optional[Link] = None + self.src: int = src + self.dst: int = dst self.arc: int = 0 self.token: Optional[str] = None self.src_label: Optional[int] = None - self.src_label2: Optional[int] = None self.middle_label: Optional[int] = None - self.middle_label2: Optional[int] = None self.dst_label: Optional[int] = None - self.dst_label2: Optional[int] = None self.color: str = EDGE_COLOR self.width: int = EDGE_WIDTH - self.linked_wireless: bool = False - self.hidden: bool = False - if self.dst: - self.linked_wireless = self.src.is_wireless() or self.dst.is_wireless() def scaled_width(self) -> float: - return self.width * self.app.app_scale + return self.width * self.canvas.app.app_scale def _get_arcpoint( - self, src_pos: tuple[float, float], dst_pos: tuple[float, float] - ) -> tuple[float, float]: + self, src_pos: Tuple[float, float], dst_pos: Tuple[float, float] + ) -> Tuple[float, float]: src_x, src_y = src_pos dst_x, dst_y = dst_pos mp_x = (src_x + dst_x) / 2 @@ -147,7 +101,7 @@ class Edge: perp_m = -1 / m b = mp_y - (perp_m * mp_x) # get arc x and y - offset = math.sqrt(self.arc**2 / (1 + (1 / m**2))) + offset = math.sqrt(self.arc ** 2 / (1 + (1 / m ** 2))) arc_x = mp_x if self.arc >= 0: arc_x += offset @@ -156,53 +110,11 @@ class Edge: arc_y = (perp_m * arc_x) + b return arc_x, arc_y - def arc_common_edges(self) -> None: - common_edges = list(self.src.edges & self.dst.edges) - common_edges += list(self.src.wireless_edges & self.dst.wireless_edges) - arc_edges(common_edges) - - def has_shadows(self) -> bool: - # still drawing - if not self.dst: - return False - return self.src.canvas != self.dst.canvas - - def draw(self, state: str) -> None: - if not self.has_shadows(): - dst = self.dst if self.dst else self.src - self.id = self.draw_edge(self.src.canvas, self.src, dst, state) - elif self.linked_wireless: - if self.src.is_wireless(): - self.src_shadow = self.dst.canvas.get_shadow(self.src) - self.id2 = self.draw_edge( - self.dst.canvas, self.src_shadow, self.dst, state - ) - if self.dst.is_wireless(): - self.dst_shadow = self.src.canvas.get_shadow(self.dst) - self.id = self.draw_edge( - self.src.canvas, self.src, self.dst_shadow, state - ) - else: - # draw shadow nodes and 2 lines 
- self.src_shadow = self.dst.canvas.get_shadow(self.src) - self.dst_shadow = self.src.canvas.get_shadow(self.dst) - self.id = self.draw_edge(self.src.canvas, self.src, self.dst_shadow, state) - self.id2 = self.draw_edge(self.dst.canvas, self.src_shadow, self.dst, state) - self.src.canvas.organize() - if self.has_shadows(): - self.dst.canvas.organize() - - def draw_edge( - self, - canvas: "CanvasGraph", - src: Union["CanvasNode", "ShadowNode"], - dst: Union["CanvasNode", "ShadowNode"], - state: str, - ) -> int: - src_pos = src.position() - dst_pos = dst.position() + def draw( + self, src_pos: Tuple[float, float], dst_pos: Tuple[float, float], state: str + ) -> None: arc_pos = self._get_arcpoint(src_pos, dst_pos) - return canvas.create_line( + self.id = self.canvas.create_line( *src_pos, *arc_pos, *dst_pos, @@ -214,270 +126,112 @@ class Edge: ) def redraw(self) -> None: - self.src.canvas.itemconfig(self.id, width=self.scaled_width(), fill=self.color) - self.move_src() - if self.id2: - self.dst.canvas.itemconfig( - self.id2, width=self.scaled_width(), fill=self.color - ) - self.move_dst() + self.canvas.itemconfig(self.id, width=self.scaled_width(), fill=self.color) + src_x, src_y, _, _, _, _ = self.canvas.coords(self.id) + src_pos = src_x, src_y + self.move_src(src_pos) + + def middle_label_pos(self) -> Tuple[float, float]: + _, _, x, y, _, _ = self.canvas.coords(self.id) + return x, y def middle_label_text(self, text: str) -> None: if self.middle_label is None: - _, _, x, y, _, _ = self.src.canvas.coords(self.id) - self.middle_label = self.src.canvas.create_text( + x, y = self.middle_label_pos() + self.middle_label = self.canvas.create_text( x, y, - font=self.app.edge_font, + font=self.canvas.app.edge_font, text=text, tags=tags.LINK_LABEL, justify=tk.CENTER, - state=self.manager.show_link_labels.state(), + state=self.canvas.show_link_labels.state(), ) - if self.id2: - _, _, x, y, _, _ = self.dst.canvas.coords(self.id2) - self.middle_label2 = self.dst.canvas.create_text( - x, - y, - font=self.app.edge_font, - text=text, - tags=tags.LINK_LABEL, - justify=tk.CENTER, - state=self.manager.show_link_labels.state(), - ) else: - self.src.canvas.itemconfig(self.middle_label, text=text) - if self.middle_label2: - self.dst.canvas.itemconfig(self.middle_label2, text=text) + self.canvas.itemconfig(self.middle_label, text=text) def clear_middle_label(self) -> None: - self.src.canvas.delete(self.middle_label) + self.canvas.delete(self.middle_label) self.middle_label = None - if self.middle_label2: - self.dst.canvas.delete(self.middle_label2) - self.middle_label2 = None + + def node_label_positions(self) -> Tuple[Tuple[float, float], Tuple[float, float]]: + src_x, src_y, _, _, dst_x, dst_y = self.canvas.coords(self.id) + v_x, v_y = dst_x - src_x, dst_y - src_y + v_len = math.sqrt(v_x ** 2 + v_y ** 2) + if v_len == 0: + u_x, u_y = 0.0, 0.0 + else: + u_x, u_y = v_x / v_len, v_y / v_len + offset_x, offset_y = TEXT_DISTANCE * u_x, TEXT_DISTANCE * u_y + return ( + (src_x + offset_x, src_y + offset_y), + (dst_x - offset_x, dst_y - offset_y), + ) def src_label_text(self, text: str) -> None: - if self.src_label is None and self.src_label2 is None: - if self.id: - src_x, src_y, _, _, dst_x, dst_y = self.src.canvas.coords(self.id) - src_pos, _ = node_label_positions(src_x, src_y, dst_x, dst_y) - self.src_label = self.src.canvas.create_text( - *src_pos, - text=text, - justify=tk.CENTER, - font=self.app.edge_font, - tags=tags.LINK_LABEL, - state=self.manager.show_link_labels.state(), - ) - if self.id2: - src_x, src_y, 
_, _, dst_x, dst_y = self.dst.canvas.coords(self.id2) - src_pos, _ = node_label_positions(src_x, src_y, dst_x, dst_y) - self.src_label2 = self.dst.canvas.create_text( - *src_pos, - text=text, - justify=tk.CENTER, - font=self.app.edge_font, - tags=tags.LINK_LABEL, - state=self.manager.show_link_labels.state(), - ) + if self.src_label is None: + src_pos, _ = self.node_label_positions() + self.src_label = self.canvas.create_text( + *src_pos, + text=text, + justify=tk.CENTER, + font=self.canvas.app.edge_font, + tags=tags.LINK_LABEL, + state=self.canvas.show_link_labels.state(), + ) else: - if self.src_label: - self.src.canvas.itemconfig(self.src_label, text=text) - if self.src_label2: - self.dst.canvas.itemconfig(self.src_label2, text=text) + self.canvas.itemconfig(self.src_label, text=text) def dst_label_text(self, text: str) -> None: - if self.dst_label is None and self.dst_label2 is None: - if self.id: - src_x, src_y, _, _, dst_x, dst_y = self.src.canvas.coords(self.id) - _, dst_pos = node_label_positions(src_x, src_y, dst_x, dst_y) - self.dst_label = self.src.canvas.create_text( - *dst_pos, - text=text, - justify=tk.CENTER, - font=self.app.edge_font, - tags=tags.LINK_LABEL, - state=self.manager.show_link_labels.state(), - ) - if self.id2: - src_x, src_y, _, _, dst_x, dst_y = self.dst.canvas.coords(self.id2) - _, dst_pos = node_label_positions(src_x, src_y, dst_x, dst_y) - self.dst_label2 = self.dst.canvas.create_text( - *dst_pos, - text=text, - justify=tk.CENTER, - font=self.app.edge_font, - tags=tags.LINK_LABEL, - state=self.manager.show_link_labels.state(), - ) + if self.dst_label is None: + _, dst_pos = self.node_label_positions() + self.dst_label = self.canvas.create_text( + *dst_pos, + text=text, + justify=tk.CENTER, + font=self.canvas.app.edge_font, + tags=tags.LINK_LABEL, + state=self.canvas.show_link_labels.state(), + ) else: - if self.dst_label: - self.src.canvas.itemconfig(self.dst_label, text=text) - if self.dst_label2: - self.dst.canvas.itemconfig(self.dst_label2, text=text) + self.canvas.itemconfig(self.dst_label, text=text) - def drawing(self, pos: tuple[float, float]) -> None: - src_x, src_y, _, _, _, _ = self.src.canvas.coords(self.id) - src_pos = src_x, src_y - self.moved(src_pos, pos) - - def move_node(self, node: "CanvasNode") -> None: - if self.src == node: - self.move_src() + def move_node(self, node_id: int, pos: Tuple[float, float]) -> None: + if self.src == node_id: + self.move_src(pos) else: - self.move_dst() + self.move_dst(pos) - def move_shadow(self, node: "ShadowNode") -> None: - if self.src_shadow == node: - self.move_src_shadow() - elif self.dst_shadow == node: - self.move_dst_shadow() - - def move_src_shadow(self) -> None: - if not self.id2: - return - _, _, _, _, dst_x, dst_y = self.dst.canvas.coords(self.id2) - dst_pos = dst_x, dst_y - self.moved2(self.src_shadow.position(), dst_pos) - - def move_dst_shadow(self) -> None: - if not self.id: - return - src_x, src_y, _, _, _, _ = self.src.canvas.coords(self.id) + def move_dst(self, dst_pos: Tuple[float, float]) -> None: + src_x, src_y, _, _, _, _ = self.canvas.coords(self.id) src_pos = src_x, src_y - self.moved(src_pos, self.dst_shadow.position()) + self.moved(src_pos, dst_pos) - def move_dst(self) -> None: - if self.dst.is_wireless() and self.has_shadows(): - return - dst_pos = self.dst.position() - if self.id2: - src_x, src_y, _, _, _, _ = self.dst.canvas.coords(self.id2) - src_pos = src_x, src_y - self.moved2(src_pos, dst_pos) - elif self.id: - src_x, src_y, _, _, _, _ = self.dst.canvas.coords(self.id) - 
src_pos = src_x, src_y - self.moved(src_pos, dst_pos) - - def move_src(self) -> None: - if not self.id: - return - _, _, _, _, dst_x, dst_y = self.src.canvas.coords(self.id) + def move_src(self, src_pos: Tuple[float, float]) -> None: + _, _, _, _, dst_x, dst_y = self.canvas.coords(self.id) dst_pos = dst_x, dst_y - self.moved(self.src.position(), dst_pos) + self.moved(src_pos, dst_pos) - def moved(self, src_pos: tuple[float, float], dst_pos: tuple[float, float]) -> None: + def moved(self, src_pos: Tuple[float, float], dst_pos: Tuple[float, float]) -> None: arc_pos = self._get_arcpoint(src_pos, dst_pos) - self.src.canvas.coords(self.id, *src_pos, *arc_pos, *dst_pos) + self.canvas.coords(self.id, *src_pos, *arc_pos, *dst_pos) if self.middle_label: - self.src.canvas.coords(self.middle_label, *arc_pos) - src_x, src_y, _, _, dst_x, dst_y = self.src.canvas.coords(self.id) - src_pos, dst_pos = node_label_positions(src_x, src_y, dst_x, dst_y) + self.canvas.coords(self.middle_label, *arc_pos) + src_pos, dst_pos = self.node_label_positions() if self.src_label: - self.src.canvas.coords(self.src_label, *src_pos) + self.canvas.coords(self.src_label, *src_pos) if self.dst_label: - self.src.canvas.coords(self.dst_label, *dst_pos) - - def moved2( - self, src_pos: tuple[float, float], dst_pos: tuple[float, float] - ) -> None: - arc_pos = self._get_arcpoint(src_pos, dst_pos) - self.dst.canvas.coords(self.id2, *src_pos, *arc_pos, *dst_pos) - if self.middle_label2: - self.dst.canvas.coords(self.middle_label2, *arc_pos) - src_x, src_y, _, _, dst_x, dst_y = self.dst.canvas.coords(self.id2) - src_pos, dst_pos = node_label_positions(src_x, src_y, dst_x, dst_y) - if self.src_label2: - self.dst.canvas.coords(self.src_label2, *src_pos) - if self.dst_label2: - self.dst.canvas.coords(self.dst_label2, *dst_pos) + self.canvas.coords(self.dst_label, *dst_pos) def delete(self) -> None: - logger.debug("deleting canvas edge, id: %s", self.id) - self.src.canvas.delete(self.id) - self.src.canvas.delete(self.src_label) - self.src.canvas.delete(self.dst_label) - if self.dst: - self.dst.canvas.delete(self.id2) - self.dst.canvas.delete(self.src_label2) - self.dst.canvas.delete(self.dst_label2) - if self.src_shadow and self.src_shadow.should_delete(): - self.src_shadow.delete() - self.src_shadow = None - if self.dst_shadow and self.dst_shadow.should_delete(): - self.dst_shadow.delete() - self.dst_shadow = None + logging.debug("deleting canvas edge, id: %s", self.id) + self.canvas.delete(self.id) + self.canvas.delete(self.src_label) + self.canvas.delete(self.dst_label) self.clear_middle_label() self.id = None - self.id2 = None self.src_label = None - self.src_label2 = None self.dst_label = None - self.dst_label2 = None - if self.dst: - self.arc_common_edges() - - def hide(self) -> None: - self.hidden = True - if self.src_shadow: - self.src_shadow.hide() - if self.dst_shadow: - self.dst_shadow.hide() - self.src.canvas.itemconfigure(self.id, state=tk.HIDDEN) - self.src.canvas.itemconfigure(self.src_label, state=tk.HIDDEN) - self.src.canvas.itemconfigure(self.dst_label, state=tk.HIDDEN) - self.src.canvas.itemconfigure(self.middle_label, state=tk.HIDDEN) - if self.id2: - self.dst.canvas.itemconfigure(self.id2, state=tk.HIDDEN) - self.dst.canvas.itemconfigure(self.src_label2, state=tk.HIDDEN) - self.dst.canvas.itemconfigure(self.dst_label2, state=tk.HIDDEN) - self.dst.canvas.itemconfigure(self.middle_label2, state=tk.HIDDEN) - - def show(self) -> None: - self.hidden = False - if self.src_shadow: - self.src_shadow.show() - if 
self.dst_shadow: - self.dst_shadow.show() - self.src.canvas.itemconfigure(self.id, state=tk.NORMAL) - state = self.manager.show_link_labels.state() - self.set_labels(state) - - def set_labels(self, state: str) -> None: - self.src.canvas.itemconfigure(self.src_label, state=state) - self.src.canvas.itemconfigure(self.dst_label, state=state) - self.src.canvas.itemconfigure(self.middle_label, state=state) - if self.id2: - self.dst.canvas.itemconfigure(self.id2, state=state) - self.dst.canvas.itemconfigure(self.src_label2, state=state) - self.dst.canvas.itemconfigure(self.dst_label2, state=state) - self.dst.canvas.itemconfigure(self.middle_label2, state=state) - - def other_node(self, node: "CanvasNode") -> "CanvasNode": - if self.src == node: - return self.dst - elif self.dst == node: - return self.src - else: - raise ValueError(f"node({node.core_node.name}) does not belong to edge") - - def other_iface(self, node: "CanvasNode") -> Optional[Interface]: - if self.src == node: - return self.link.iface2 if self.link else None - elif self.dst == node: - return self.link.iface1 if self.link else None - else: - raise ValueError(f"node({node.core_node.name}) does not belong to edge") - - def iface(self, node: "CanvasNode") -> Optional[Interface]: - if self.src == node: - return self.link.iface1 if self.link else None - elif self.dst == node: - return self.link.iface2 if self.link else None - else: - raise ValueError(f"node({node.core_node.name}) does not belong to edge") class CanvasWirelessEdge(Edge): @@ -485,44 +239,35 @@ class CanvasWirelessEdge(Edge): def __init__( self, - app: "Application", - src: "CanvasNode", - dst: "CanvasNode", + canvas: "CanvasGraph", + src: int, + dst: int, network_id: int, token: str, + src_pos: Tuple[float, float], + dst_pos: Tuple[float, float], link: Link, ) -> None: - logger.debug("drawing wireless link from node %s to node %s", src, dst) - super().__init__(app, src, dst) - self.src.wireless_edges.add(self) - self.dst.wireless_edges.add(self) + logging.debug("drawing wireless link from node %s to node %s", src, dst) + super().__init__(canvas, src, dst) self.network_id: int = network_id self.link: Link = link self.token: str = token self.width: float = WIRELESS_WIDTH color = link.color if link.color else WIRELESS_COLOR self.color: str = color - state = self.manager.show_wireless.state() - self.draw(state) + self.draw(src_pos, dst_pos, self.canvas.show_wireless.state()) if link.label: self.middle_label_text(link.label) - if self.src.hidden or self.dst.hidden: - self.hide() self.set_binding() - self.arc_common_edges() def set_binding(self) -> None: - self.src.canvas.tag_bind(self.id, "", self.show_info) - if self.id2 is not None: - self.dst.canvas.tag_bind(self.id2, "", self.show_info) + self.canvas.tag_bind(self.id, "", self.show_info) def show_info(self, _event: tk.Event) -> None: - self.app.display_info(WirelessEdgeInfoFrame, app=self.app, edge=self) - - def delete(self) -> None: - self.src.wireless_edges.discard(self) - self.dst.wireless_edges.remove(self) - super().delete() + self.canvas.app.display_info( + WirelessEdgeInfoFrame, app=self.canvas.app, edge=self + ) class CanvasEdge(Edge): @@ -531,44 +276,52 @@ class CanvasEdge(Edge): """ def __init__( - self, app: "Application", src: "CanvasNode", dst: "CanvasNode" = None + self, + canvas: "CanvasGraph", + src: int, + src_pos: Tuple[float, float], + dst_pos: Tuple[float, float], ) -> None: """ Create an instance of canvas edge object """ - super().__init__(app, src, dst) + super().__init__(canvas, src) 
self.text_src: Optional[int] = None self.text_dst: Optional[int] = None + self.link: Optional[Link] = None + self.linked_wireless: bool = False self.asymmetric_link: Optional[Link] = None self.throughput: Optional[float] = None - self.draw(tk.NORMAL) + self.draw(src_pos, dst_pos, tk.NORMAL) + self.set_binding() + self.context: tk.Menu = tk.Menu(self.canvas) + self.create_context() def is_customized(self) -> bool: return self.width != EDGE_WIDTH or self.color != EDGE_COLOR - def set_bindings(self) -> None: - if self.id: - show_context = functools.partial(self.show_context, self.src.canvas) - self.src.canvas.tag_bind(self.id, "", show_context) - self.src.canvas.tag_bind(self.id, "", self.show_info) - if self.id2: - show_context = functools.partial(self.show_context, self.dst.canvas) - self.dst.canvas.tag_bind(self.id2, "", show_context) - self.dst.canvas.tag_bind(self.id2, "", self.show_info) + def create_context(self) -> None: + themes.style_menu(self.context) + self.context.add_command(label="Configure", command=self.click_configure) + self.context.add_command(label="Delete", command=self.click_delete) + + def set_binding(self) -> None: + self.canvas.tag_bind(self.id, "", self.show_context) + self.canvas.tag_bind(self.id, "", self.show_info) def iface_label(self, iface: Interface) -> str: label = "" - if iface.name and self.manager.show_iface_names.get(): + if iface.name and self.canvas.show_iface_names.get(): label = f"{iface.name}" - if iface.ip4 and self.manager.show_ip4s.get(): + if iface.ip4 and self.canvas.show_ip4s.get(): label = f"{label}\n" if label else "" label += f"{iface.ip4}/{iface.ip4_mask}" - if iface.ip6 and self.manager.show_ip6s.get(): + if iface.ip6 and self.canvas.show_ip6s.get(): label = f"{label}\n" if label else "" label += f"{iface.ip6}/{iface.ip6_mask}" return label - def create_node_labels(self) -> tuple[str, str]: + def create_node_labels(self) -> Tuple[str, str]: label1 = None if self.link.iface1: label1 = self.iface_label(self.link.iface1) @@ -588,126 +341,82 @@ class CanvasEdge(Edge): super().redraw() self.draw_labels() - def show(self) -> None: - super().show() - self.check_visibility() - - def check_visibility(self) -> None: - state = tk.NORMAL - hide_links = self.manager.show_links.state() == tk.HIDDEN - if self.linked_wireless or hide_links: + def check_options(self) -> None: + if not self.link.options: + return + if self.link.options.loss == EDGE_LOSS: state = tk.HIDDEN - elif self.link.options: - hide_loss = self.manager.show_loss_links.state() == tk.HIDDEN - should_hide = self.link.options.loss >= EDGE_LOSS - if hide_loss and should_hide: - state = tk.HIDDEN - if self.id: - self.src.canvas.itemconfigure(self.id, state=state) - if self.id2: - self.dst.canvas.itemconfigure(self.id2, state=state) + self.canvas.addtag_withtag(tags.LOSS_EDGES, self.id) + else: + state = tk.NORMAL + self.canvas.dtag(self.id, tags.LOSS_EDGES) + if self.canvas.show_loss_links.state() == tk.HIDDEN: + self.canvas.itemconfigure(self.id, state=state) def set_throughput(self, throughput: float) -> None: throughput = 0.001 * throughput text = f"{throughput:.3f} kbps" self.middle_label_text(text) - if throughput > self.manager.throughput_threshold: - color = self.manager.throughput_color - width = self.manager.throughput_width + if throughput > self.canvas.throughput_threshold: + color = self.canvas.throughput_color + width = self.canvas.throughput_width else: color = self.color width = self.scaled_width() - self.src.canvas.itemconfig(self.id, fill=color, width=width) - if self.id2: 
- self.dst.canvas.itemconfig(self.id2, fill=color, width=width) + self.canvas.itemconfig(self.id, fill=color, width=width) def clear_throughput(self) -> None: self.clear_middle_label() if not self.linked_wireless: self.draw_link_options() - def complete(self, dst: "CanvasNode", link: Link = None) -> None: - logger.debug( - "completing wired link from node(%s) to node(%s)", - self.src.core_node.name, - dst.core_node.name, - ) + def complete(self, dst: int, linked_wireless: bool) -> None: self.dst = dst - self.linked_wireless = self.src.is_wireless() or self.dst.is_wireless() - self.set_bindings() + self.linked_wireless = linked_wireless + dst_pos = self.canvas.coords(self.dst) + self.move_dst(dst_pos) self.check_wireless() - if link is None: - link = self.app.core.ifaces_manager.create_link(self) - if link.iface1 and not nodeutils.is_rj45(self.src.core_node): - iface1 = link.iface1 - self.src.ifaces[iface1.id] = iface1 - if link.iface2 and not nodeutils.is_rj45(self.dst.core_node): - iface2 = link.iface2 - self.dst.ifaces[iface2.id] = iface2 - self.token = create_edge_token(link) - self.link = link - self.src.edges.add(self) - self.dst.edges.add(self) - if not self.linked_wireless: - self.arc_common_edges() - self.draw_labels() - self.check_visibility() - self.app.core.save_edge(self) - self.src.canvas.organize() - if self.has_shadows(): - self.dst.canvas.organize() - self.manager.edges[self.token] = self + logging.debug("draw wired link from node %s to node %s", self.src, dst) def check_wireless(self) -> None: - if not self.linked_wireless: - return - if self.id: - self.src.canvas.itemconfig(self.id, state=tk.HIDDEN) - self.src.canvas.dtag(self.id, tags.EDGE) - if self.id2: - self.dst.canvas.itemconfig(self.id2, state=tk.HIDDEN) - self.dst.canvas.dtag(self.id2, tags.EDGE) - # add antenna to node - if self.src.is_wireless() and not self.dst.is_wireless(): - self.dst.add_antenna() - elif not self.src.is_wireless() and self.dst.is_wireless(): - self.src.add_antenna() - else: - self.src.add_antenna() + if self.linked_wireless: + self.canvas.itemconfig(self.id, state=tk.HIDDEN) + self.canvas.dtag(self.id, tags.EDGE) + self._check_antenna() + + def _check_antenna(self) -> None: + src_node = self.canvas.nodes[self.src] + dst_node = self.canvas.nodes[self.dst] + src_node_type = src_node.core_node.type + dst_node_type = dst_node.core_node.type + is_src_wireless = NodeUtils.is_wireless_node(src_node_type) + is_dst_wireless = NodeUtils.is_wireless_node(dst_node_type) + if is_src_wireless or is_dst_wireless: + if is_src_wireless and not is_dst_wireless: + dst_node.add_antenna() + elif not is_src_wireless and is_dst_wireless: + src_node.add_antenna() + else: + src_node.add_antenna() def reset(self) -> None: - if self.middle_label: - self.src.canvas.delete(self.middle_label) - self.middle_label = None - if self.middle_label2: - self.dst.canvas.delete(self.middle_label2) - self.middle_label2 = None - if self.id: - self.src.canvas.itemconfig( - self.id, fill=self.color, width=self.scaled_width() - ) - if self.id2: - self.dst.canvas.itemconfig( - self.id2, fill=self.color, width=self.scaled_width() - ) + self.canvas.delete(self.middle_label) + self.middle_label = None + self.canvas.itemconfig(self.id, fill=self.color, width=self.scaled_width()) def show_info(self, _event: tk.Event) -> None: - self.app.display_info(EdgeInfoFrame, app=self.app, edge=self) + self.canvas.app.display_info(EdgeInfoFrame, app=self.canvas.app, edge=self) - def show_context(self, canvas: "CanvasGraph", event: tk.Event) -> None: - 
context: tk.Menu = tk.Menu(canvas) - themes.style_menu(context) - context.add_command(label="Configure", command=self.click_configure) - context.add_command(label="Delete", command=self.click_delete) - state = tk.DISABLED if self.app.core.is_runtime() else tk.NORMAL - context.entryconfigure(1, state=state) - context.tk_popup(event.x_root, event.y_root) + def show_context(self, event: tk.Event) -> None: + state = tk.DISABLED if self.canvas.core.is_runtime() else tk.NORMAL + self.context.entryconfigure(1, state=state) + self.context.tk_popup(event.x_root, event.y_root) def click_delete(self) -> None: - self.delete() + self.canvas.delete_edge(self) def click_configure(self) -> None: - dialog = LinkConfigurationDialog(self.app, self) + dialog = LinkConfigurationDialog(self.canvas.app, self) dialog.show() def draw_link_options(self): @@ -746,19 +455,3 @@ class CanvasEdge(Edge): lines.append(dup_line) label = "\n".join(lines) self.middle_label_text(label) - - def delete(self) -> None: - self.src.edges.discard(self) - if self.dst: - self.dst.edges.discard(self) - if self.link.iface1 and not nodeutils.is_rj45(self.src.core_node): - del self.src.ifaces[self.link.iface1.id] - if self.link.iface2 and not nodeutils.is_rj45(self.dst.core_node): - del self.dst.ifaces[self.link.iface2.id] - if self.src.is_wireless(): - self.dst.delete_antenna() - if self.dst.is_wireless(): - self.src.delete_antenna() - self.app.core.deleted_canvas_edges([self]) - super().delete() - self.manager.edges.pop(self.token, None) diff --git a/daemon/core/gui/graph/graph.py b/daemon/core/gui/graph/graph.py index 1a701239..199a2006 100644 --- a/daemon/core/gui/graph/graph.py +++ b/daemon/core/gui/graph/graph.py @@ -1,71 +1,94 @@ import logging import tkinter as tk from copy import deepcopy -from pathlib import Path -from typing import TYPE_CHECKING, Any, Optional +from tkinter import BooleanVar +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple from PIL import Image from PIL.ImageTk import PhotoImage -from core.api.grpc.wrappers import Interface, Link -from core.gui import appconfig -from core.gui import nodeutils as nutils +from core.api.grpc.wrappers import ( + Interface, + Link, + LinkType, + Node, + Session, + ThroughputsEvent, +) from core.gui.dialogs.shapemod import ShapeDialog from core.gui.graph import tags -from core.gui.graph.edges import EDGE_WIDTH, CanvasEdge +from core.gui.graph.edges import ( + EDGE_WIDTH, + CanvasEdge, + CanvasWirelessEdge, + Edge, + arc_edges, + create_edge_token, + create_wireless_token, +) from core.gui.graph.enums import GraphMode, ScaleOption -from core.gui.graph.node import CanvasNode, ShadowNode +from core.gui.graph.node import CanvasNode from core.gui.graph.shape import Shape from core.gui.graph.shapeutils import ShapeType, is_draw_shape, is_marker - -logger = logging.getLogger(__name__) +from core.gui.images import ImageEnum, TypeToImage +from core.gui.nodeutils import NodeDraw, NodeUtils if TYPE_CHECKING: from core.gui.app import Application - from core.gui.graph.manager import CanvasManager from core.gui.coreclient import CoreClient -ZOOM_IN: float = 1.1 -ZOOM_OUT: float = 0.9 -MOVE_NODE_MODES: set[GraphMode] = {GraphMode.NODE, GraphMode.SELECT} -MOVE_SHAPE_MODES: set[GraphMode] = {GraphMode.ANNOTATION, GraphMode.SELECT} -BACKGROUND_COLOR: str = "#cccccc" +ZOOM_IN = 1.1 +ZOOM_OUT = 0.9 +ICON_SIZE = 48 +MOVE_NODE_MODES = {GraphMode.NODE, GraphMode.SELECT} +MOVE_SHAPE_MODES = {GraphMode.ANNOTATION, GraphMode.SELECT} + + +class ShowVar(BooleanVar): + def __init__(self, 
canvas: "CanvasGraph", tag: str, value: bool) -> None: + super().__init__(value=value) + self.canvas = canvas + self.tag = tag + + def state(self) -> str: + return tk.NORMAL if self.get() else tk.HIDDEN + + def click_handler(self) -> None: + self.canvas.itemconfigure(self.tag, state=self.state()) class CanvasGraph(tk.Canvas): def __init__( - self, - master: tk.BaseWidget, - app: "Application", - manager: "CanvasManager", - core: "CoreClient", - _id: int, - dimensions: tuple[int, int], + self, master: tk.BaseWidget, app: "Application", core: "CoreClient" ) -> None: - super().__init__(master, highlightthickness=0, background=BACKGROUND_COLOR) - self.id: int = _id + super().__init__(master, highlightthickness=0, background="#cccccc") self.app: "Application" = app - self.manager: "CanvasManager" = manager self.core: "CoreClient" = core - self.selection: dict[int, int] = {} + self.mode: GraphMode = GraphMode.SELECT + self.annotation_type: Optional[ShapeType] = None + self.selection: Dict[int, int] = {} self.select_box: Optional[Shape] = None self.selected: Optional[int] = None - self.nodes: dict[int, CanvasNode] = {} - self.shadow_nodes: dict[int, ShadowNode] = {} - self.shapes: dict[int, Shape] = {} - self.shadow_core_nodes: dict[int, ShadowNode] = {} + self.node_draw: Optional[NodeDraw] = None + self.nodes: Dict[int, CanvasNode] = {} + self.edges: Dict[str, CanvasEdge] = {} + self.shapes: Dict[int, Shape] = {} + self.wireless_edges: Dict[str, CanvasWirelessEdge] = {} # map wireless/EMANE node to the set of MDRs connected to that node - self.wireless_network: dict[int, set[int]] = {} + self.wireless_network: Dict[int, Set[int]] = {} self.drawing_edge: Optional[CanvasEdge] = None self.rect: Optional[int] = None self.shape_drawing: bool = False - self.current_dimensions: tuple[int, int] = dimensions + width = self.app.guiconfig.preferences.width + height = self.app.guiconfig.preferences.height + self.default_dimensions: Tuple[int, int] = (width, height) + self.current_dimensions: Tuple[int, int] = self.default_dimensions self.ratio: float = 1.0 - self.offset: tuple[int, int] = (0, 0) - self.cursor: tuple[int, int] = (0, 0) - self.to_copy: list[CanvasNode] = [] + self.offset: Tuple[int, int] = (0, 0) + self.cursor: Tuple[int, int] = (0, 0) + self.to_copy: List[CanvasNode] = [] # background related self.wallpaper_id: Optional[int] = None @@ -75,6 +98,23 @@ class CanvasGraph(tk.Canvas): self.scale_option: tk.IntVar = tk.IntVar(value=1) self.adjust_to_dim: tk.BooleanVar = tk.BooleanVar(value=False) + # throughput related + self.throughput_threshold: float = 250.0 + self.throughput_width: int = 10 + self.throughput_color: str = "#FF0000" + + # drawing related + self.show_node_labels: ShowVar = ShowVar(self, tags.NODE_LABEL, value=True) + self.show_link_labels: ShowVar = ShowVar(self, tags.LINK_LABEL, value=True) + self.show_links: ShowVar = ShowVar(self, tags.EDGE, value=True) + self.show_wireless: ShowVar = ShowVar(self, tags.WIRELESS_EDGE, value=True) + self.show_grid: ShowVar = ShowVar(self, tags.GRIDLINE, value=True) + self.show_annotations: ShowVar = ShowVar(self, tags.ANNOTATION, value=True) + self.show_loss_links: ShowVar = ShowVar(self, tags.LOSS_EDGES, value=True) + self.show_iface_names: BooleanVar = BooleanVar(value=False) + self.show_ip4s: BooleanVar = BooleanVar(value=True) + self.show_ip6s: BooleanVar = BooleanVar(value=True) + # bindings self.setup_bindings() @@ -82,11 +122,11 @@ class CanvasGraph(tk.Canvas): self.draw_canvas() self.draw_grid() - def draw_canvas(self, dimensions: 
tuple[int, int] = None) -> None: + def draw_canvas(self, dimensions: Tuple[int, int] = None) -> None: if self.rect is not None: self.delete(self.rect) if not dimensions: - dimensions = self.manager.default_dimensions + dimensions = self.default_dimensions self.current_dimensions = dimensions self.rect = self.create_rectangle( 0, @@ -99,19 +139,42 @@ class CanvasGraph(tk.Canvas): ) self.configure(scrollregion=self.bbox(tk.ALL)) + def reset_and_redraw(self, session: Session) -> None: + # reset view options to default state + self.show_node_labels.set(True) + self.show_link_labels.set(True) + self.show_grid.set(True) + self.show_annotations.set(True) + self.show_iface_names.set(False) + self.show_ip4s.set(True) + self.show_ip6s.set(True) + self.show_loss_links.set(True) + + # delete any existing drawn items + for tag in tags.RESET_TAGS: + self.delete(tag) + + # set the private variables to default value + self.mode = GraphMode.SELECT + self.annotation_type = None + self.node_draw = None + self.selected = None + self.nodes.clear() + self.edges.clear() + self.shapes.clear() + self.wireless_edges.clear() + self.wireless_network.clear() + self.drawing_edge = None + self.draw_session(session) + def setup_bindings(self) -> None: """ Bind any mouse events or hot keys to the matching action """ - self.bind("", self.copy_selected) - self.bind("", self.paste_selected) - self.bind("", self.cut_selected) - self.bind("", self.delete_selected) - self.bind("", self.hide_selected) self.bind("", self.click_press) self.bind("", self.click_release) self.bind("", self.click_motion) - self.bind("", self.delete_selected) + self.bind("", self.press_delete) self.bind("", self.ctrl_click) self.bind("", self.double_click) self.bind("", self.zoom) @@ -120,33 +183,37 @@ class CanvasGraph(tk.Canvas): self.bind("", lambda e: self.scan_mark(e.x, e.y)) self.bind("", lambda e: self.scan_dragto(e.x, e.y, gain=1)) - def get_shadow(self, node: CanvasNode) -> ShadowNode: - shadow_node = self.shadow_core_nodes.get(node.core_node.id) - if not shadow_node: - shadow_node = ShadowNode(self.app, self, node) - return shadow_node - - def get_actual_coords(self, x: float, y: float) -> tuple[float, float]: + def get_actual_coords(self, x: float, y: float) -> Tuple[float, float]: actual_x = (x - self.offset[0]) / self.ratio actual_y = (y - self.offset[1]) / self.ratio return actual_x, actual_y - def get_scaled_coords(self, x: float, y: float) -> tuple[float, float]: + def get_scaled_coords(self, x: float, y: float) -> Tuple[float, float]: scaled_x = (x * self.ratio) + self.offset[0] scaled_y = (y * self.ratio) + self.offset[1] return scaled_x, scaled_y - def inside_canvas(self, x: float, y: float) -> tuple[bool, bool]: + def inside_canvas(self, x: float, y: float) -> Tuple[bool, bool]: x1, y1, x2, y2 = self.bbox(self.rect) valid_x = x1 <= x <= x2 valid_y = y1 <= y <= y2 return valid_x and valid_y - def valid_position(self, x1: int, y1: int, x2: int, y2: int) -> tuple[bool, bool]: + def valid_position(self, x1: int, y1: int, x2: int, y2: int) -> Tuple[bool, bool]: valid_topleft = self.inside_canvas(x1, y1) valid_bottomright = self.inside_canvas(x2, y2) return valid_topleft and valid_bottomright + def set_throughputs(self, throughputs_event: ThroughputsEvent) -> None: + for iface_throughput in throughputs_event.iface_throughputs: + node_id = iface_throughput.node_id + iface_id = iface_throughput.iface_id + throughput = iface_throughput.throughput + iface_to_edge_id = (node_id, iface_id) + edge = self.core.iface_to_edge.get(iface_to_edge_id) 
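# --- aside (editor's sketch, not part of the patch) ---------------------------
# Once the (node_id, iface_id) lookup above resolves to a canvas edge, the edge
# restyles itself from the throughput sample. The helper below is a minimal,
# self-contained illustration of that decision using the canvas-level settings
# introduced earlier (throughput_threshold / throughput_width / throughput_color);
# the base color/width and the label format are assumptions, not the actual
# edges.py implementation.
def throughput_style(
    throughput: float,
    threshold: float = 250.0,
    hot_color: str = "#FF0000",
    hot_width: int = 10,
    base_color: str = "#000000",
    base_width: int = 1,
) -> tuple:
    """Return (fill, width, label) for a throughput sample given in kbps."""
    label = f"{throughput:.3f} kbps"
    if throughput > threshold:
        return hot_color, hot_width, label
    return base_color, base_width, label

# usage (edge_id and canvas are hypothetical here):
#   fill, width, label = throughput_style(512.0)
#   canvas.itemconfig(edge_id, fill=fill, width=width)
# --- end aside -----------------------------------------------------------------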
+ if edge: + edge.set_throughput(throughput) + def draw_grid(self) -> None: """ Create grid. @@ -161,7 +228,124 @@ class CanvasGraph(tk.Canvas): self.tag_lower(tags.GRIDLINE) self.tag_lower(self.rect) - def canvas_xy(self, event: tk.Event) -> tuple[float, float]: + def add_wired_edge(self, src: CanvasNode, dst: CanvasNode, link: Link) -> None: + token = create_edge_token(link) + if token in self.edges and link.options.unidirectional: + edge = self.edges[token] + edge.asymmetric_link = link + elif token not in self.edges: + node1 = src.core_node + node2 = dst.core_node + src_pos = (node1.position.x, node1.position.y) + dst_pos = (node2.position.x, node2.position.y) + edge = CanvasEdge(self, src.id, src_pos, dst_pos) + self.complete_edge(src, dst, edge, link) + + def delete_wired_edge(self, link: Link) -> None: + token = create_edge_token(link) + edge = self.edges.get(token) + if edge: + self.delete_edge(edge) + + def update_wired_edge(self, link: Link) -> None: + token = create_edge_token(link) + edge = self.edges.get(token) + if edge: + edge.link.options = deepcopy(link.options) + edge.draw_link_options() + edge.check_options() + + def add_wireless_edge(self, src: CanvasNode, dst: CanvasNode, link: Link) -> None: + network_id = link.network_id if link.network_id else None + token = create_wireless_token(src.id, dst.id, network_id) + if token in self.wireless_edges: + logging.warning("ignoring link that already exists: %s", link) + return + src_pos = self.coords(src.id) + dst_pos = self.coords(dst.id) + edge = CanvasWirelessEdge( + self, src.id, dst.id, network_id, token, src_pos, dst_pos, link + ) + self.wireless_edges[token] = edge + src.wireless_edges.add(edge) + dst.wireless_edges.add(edge) + self.tag_raise(src.id) + self.tag_raise(dst.id) + self.arc_common_edges(edge) + + def delete_wireless_edge( + self, src: CanvasNode, dst: CanvasNode, link: Link + ) -> None: + network_id = link.network_id if link.network_id else None + token = create_wireless_token(src.id, dst.id, network_id) + if token not in self.wireless_edges: + return + edge = self.wireless_edges.pop(token) + edge.delete() + src.wireless_edges.remove(edge) + dst.wireless_edges.remove(edge) + self.arc_common_edges(edge) + + def update_wireless_edge( + self, src: CanvasNode, dst: CanvasNode, link: Link + ) -> None: + if not link.label: + return + network_id = link.network_id if link.network_id else None + token = create_wireless_token(src.id, dst.id, network_id) + if token not in self.wireless_edges: + self.add_wireless_edge(src, dst, link) + else: + edge = self.wireless_edges[token] + edge.middle_label_text(link.label) + + def add_core_node(self, core_node: Node) -> None: + logging.debug("adding node: %s", core_node) + # if the gui can't find node's image, default to the "edit-node" image + image = NodeUtils.node_image(core_node, self.app.guiconfig, self.app.app_scale) + if not image: + image = self.app.get_icon(ImageEnum.EDITNODE, ICON_SIZE) + x = core_node.position.x + y = core_node.position.y + node = CanvasNode(self.app, x, y, core_node, image) + self.nodes[node.id] = node + self.core.set_canvas_node(core_node, node) + + def draw_session(self, session: Session) -> None: + """ + Draw existing session. 
+ """ + # draw existing nodes + for core_node in session.nodes.values(): + logging.debug("drawing node: %s", core_node) + # peer to peer node is not drawn on the GUI + if NodeUtils.is_ignore_node(core_node.type): + continue + self.add_core_node(core_node) + # draw existing links + for link in session.links: + logging.debug("drawing link: %s", link) + canvas_node1 = self.core.get_canvas_node(link.node1_id) + canvas_node2 = self.core.get_canvas_node(link.node2_id) + if link.type == LinkType.WIRELESS: + self.add_wireless_edge(canvas_node1, canvas_node2, link) + else: + self.add_wired_edge(canvas_node1, canvas_node2, link) + + def stopped_session(self) -> None: + # clear wireless edges + for edge in self.wireless_edges.values(): + edge.delete() + src_node = self.nodes[edge.src] + src_node.wireless_edges.remove(edge) + dst_node = self.nodes[edge.dst] + dst_node.wireless_edges.remove(edge) + self.wireless_edges.clear() + + # clear throughputs + self.clear_throughputs() + + def canvas_xy(self, event: tk.Event) -> Tuple[float, float]: """ Convert window coordinate to canvas coordinate """ @@ -179,29 +363,31 @@ class CanvasGraph(tk.Canvas): for _id in overlapping: if self.drawing_edge and self.drawing_edge.id == _id: continue - elif _id in self.nodes: + + if _id in self.nodes: selected = _id - elif _id in self.shapes: - selected = _id - elif _id in self.shadow_nodes: + break + + if _id in self.shapes: selected = _id + return selected def click_release(self, event: tk.Event) -> None: """ Draw a node or finish drawing an edge according to the current graph mode """ - logger.debug("click release") + logging.debug("click release") x, y = self.canvas_xy(event) if not self.inside_canvas(x, y): return - if self.manager.mode == GraphMode.ANNOTATION: + if self.mode == GraphMode.ANNOTATION: self.focus_set() if self.shape_drawing: shape = self.shapes[self.selected] shape.shape_complete(x, y) self.shape_drawing = False - elif self.manager.mode == GraphMode.SELECT: + elif self.mode == GraphMode.SELECT: self.focus_set() if self.select_box: x0, y0, x1, y1 = self.coords(self.select_box.id) @@ -217,36 +403,61 @@ class CanvasGraph(tk.Canvas): else: self.focus_set() self.selected = self.get_selected(event) - logger.debug( - "click release selected(%s) mode(%s)", self.selected, self.manager.mode - ) - if self.manager.mode == GraphMode.EDGE: + logging.debug(f"click release selected({self.selected}) mode({self.mode})") + if self.mode == GraphMode.EDGE: self.handle_edge_release(event) - elif self.manager.mode == GraphMode.NODE: + elif self.mode == GraphMode.NODE: self.add_node(x, y) - elif self.manager.mode == GraphMode.PICKNODE: - self.manager.mode = GraphMode.NODE + elif self.mode == GraphMode.PICKNODE: + self.mode = GraphMode.NODE self.selected = None def handle_edge_release(self, _event: tk.Event) -> None: - # not drawing edge return - if not self.drawing_edge: - return edge = self.drawing_edge self.drawing_edge = None + + # not drawing edge return + if edge is None: + return + # edge dst must be a node - logger.debug("current selected: %s", self.selected) + logging.debug("current selected: %s", self.selected) + src_node = self.nodes.get(edge.src) dst_node = self.nodes.get(self.selected) - if not dst_node: + if not dst_node or not src_node: edge.delete() return - # check if node can be linked - if not edge.src.is_linkable(dst_node): + + # edge dst is same as src, delete edge + if edge.src == self.selected: edge.delete() return + + # rj45 nodes can only support one link + if 
NodeUtils.is_rj45_node(src_node.core_node.type) and src_node.edges: + edge.delete() + return + if NodeUtils.is_rj45_node(dst_node.core_node.type) and dst_node.edges: + edge.delete() + return + + # only 1 link between bridge based nodes + is_src_bridge = NodeUtils.is_bridge_node(src_node.core_node) + is_dst_bridge = NodeUtils.is_bridge_node(dst_node.core_node) + common_links = src_node.edges & dst_node.edges + if all([is_src_bridge, is_dst_bridge, common_links]): + edge.delete() + return + # finalize edge creation - edge.drawing(dst_node.position()) - edge.complete(dst_node) + self.complete_edge(src_node, dst_node, edge) + + def arc_common_edges(self, edge: Edge) -> None: + src_node = self.nodes[edge.src] + dst_node = self.nodes[edge.dst] + common_edges = list(src_node.edges & dst_node.edges) + common_edges += list(src_node.wireless_edges & dst_node.wireless_edges) + arc_edges(common_edges) def select_object(self, object_id: int, choose_multiple: bool = False) -> None: """ @@ -282,7 +493,7 @@ class CanvasGraph(tk.Canvas): if select_id is not None: self.move(select_id, x_offset, y_offset) - def delete_selected_objects(self, _event: tk.Event = None) -> None: + def delete_selected_objects(self) -> None: edges = set() nodes = [] for object_id in self.selection: @@ -293,16 +504,28 @@ class CanvasGraph(tk.Canvas): # delete node and related edges if object_id in self.nodes: canvas_node = self.nodes.pop(object_id) + canvas_node.delete() + nodes.append(canvas_node) + is_wireless = NodeUtils.is_wireless_node(canvas_node.core_node.type) # delete related edges - while canvas_node.edges: - edge = canvas_node.edges.pop() + for edge in canvas_node.edges: if edge in edges: continue edges.add(edge) + del self.edges[edge.token] edge.delete() - # delete node - canvas_node.delete() - nodes.append(canvas_node) + # update node connected to edge being deleted + other_id = edge.src + other_iface = edge.link.iface1 + if edge.src == object_id: + other_id = edge.dst + other_iface = edge.link.iface2 + other_node = self.nodes[other_id] + other_node.edges.remove(edge) + if other_iface: + del other_node.ifaces[other_iface.id] + if is_wireless: + other_node.delete_antenna() # delete shape if object_id in self.shapes: @@ -311,21 +534,44 @@ class CanvasGraph(tk.Canvas): self.selection.clear() self.core.deleted_canvas_nodes(nodes) + self.core.deleted_canvas_edges(edges) - def hide_selected(self, _event: tk.Event = None) -> None: + def hide_selected_objects(self) -> None: + edges = set() for object_id in self.selection: # delete selection box selection_id = self.selection[object_id] self.delete(selection_id) + # hide node and related edges if object_id in self.nodes: canvas_node = self.nodes[object_id] canvas_node.hide() + # hide related edges + for edge in canvas_node.edges: + if edge in edges: + continue + edges.add(edge) - def show_hidden(self) -> None: - for node in self.nodes.values(): - if node.hidden: - node.show() + def delete_edge(self, edge: CanvasEdge) -> None: + edge.delete() + del self.edges[edge.token] + src_node = self.nodes[edge.src] + src_node.edges.discard(edge) + if edge.link.iface1: + del src_node.ifaces[edge.link.iface1.id] + dst_node = self.nodes[edge.dst] + dst_node.edges.discard(edge) + if edge.link.iface2: + del dst_node.ifaces[edge.link.iface2.id] + src_wireless = NodeUtils.is_wireless_node(src_node.core_node.type) + if src_wireless: + dst_node.delete_antenna() + dst_wireless = NodeUtils.is_wireless_node(dst_node.core_node.type) + if dst_wireless: + src_node.delete_antenna() + 
self.core.deleted_canvas_edges([edge]) + self.arc_common_edges(edge) def zoom(self, event: tk.Event, factor: float = None) -> None: if not factor: @@ -338,8 +584,8 @@ class CanvasGraph(tk.Canvas): self.offset[0] * factor + event.x * (1 - factor), self.offset[1] * factor + event.y * (1 - factor), ) - logger.debug("ratio: %s", self.ratio) - logger.debug("offset: %s", self.offset) + logging.debug("ratio: %s", self.ratio) + logging.debug("offset: %s", self.offset) self.app.statusbar.set_zoom(self.ratio) if self.wallpaper: self.redraw_wallpaper() @@ -354,18 +600,18 @@ class CanvasGraph(tk.Canvas): self.cursor = x, y selected = self.get_selected(event) - logger.debug("click press(%s): %s", self.cursor, selected) + logging.debug("click press(%s): %s", self.cursor, selected) x_check = self.cursor[0] - self.offset[0] y_check = self.cursor[1] - self.offset[1] - logger.debug("click press offset(%s, %s)", x_check, y_check) + logging.debug("click press offset(%s, %s)", x_check, y_check) is_node = selected in self.nodes - if self.manager.mode == GraphMode.EDGE and is_node: - node = self.nodes[selected] - self.drawing_edge = CanvasEdge(self.app, node) + if self.mode == GraphMode.EDGE and is_node: + pos = self.coords(selected) + self.drawing_edge = CanvasEdge(self, selected, pos, pos) self.organize() - if self.manager.mode == GraphMode.ANNOTATION: - if is_marker(self.manager.annotation_type): + if self.mode == GraphMode.ANNOTATION: + if is_marker(self.annotation_type): r = self.app.toolbar.marker_frame.size.get() self.create_oval( x - r, @@ -375,11 +621,11 @@ class CanvasGraph(tk.Canvas): fill=self.app.toolbar.marker_frame.color, outline="", tags=(tags.MARKER, tags.ANNOTATION), - state=self.manager.show_annotations.state(), + state=self.show_annotations.state(), ) return if selected is None: - shape = Shape(self.app, self, self.manager.annotation_type, x, y) + shape = Shape(self.app, self, self.annotation_type, x, y) self.selected = shape.id self.shape_drawing = True self.shapes[shape.id] = shape @@ -394,24 +640,14 @@ class CanvasGraph(tk.Canvas): node = self.nodes[selected] self.select_object(node.id) self.selected = selected - logger.debug( + logging.debug( "selected node(%s), coords: (%s, %s)", node.core_node.name, node.core_node.position.x, node.core_node.position.y, ) - elif selected in self.shadow_nodes: - shadow_node = self.shadow_nodes[selected] - self.select_object(shadow_node.id) - self.selected = selected - logger.debug( - "selected shadow node(%s), coords: (%s, %s)", - shadow_node.node.core_node.name, - shadow_node.node.core_node.position.x, - shadow_node.node.core_node.position.y, - ) else: - if self.manager.mode == GraphMode.SELECT: + if self.mode == GraphMode.SELECT: shape = Shape(self.app, self, ShapeType.RECTANGLE, x, y) self.select_box = shape self.clear_selection() @@ -425,7 +661,7 @@ class CanvasGraph(tk.Canvas): self.cursor = x, y # handle multiple selections - logger.debug("control left click: %s", event) + logging.debug("control left click: %s", event) selected = self.get_selected(event) if ( selected not in self.selection @@ -440,7 +676,7 @@ class CanvasGraph(tk.Canvas): if self.select_box: self.select_box.delete() self.select_box = None - if is_draw_shape(self.manager.annotation_type) and self.shape_drawing: + if is_draw_shape(self.annotation_type) and self.shape_drawing: shape = self.shapes.pop(self.selected) shape.delete() self.shape_drawing = False @@ -450,14 +686,14 @@ class CanvasGraph(tk.Canvas): y_offset = y - self.cursor[1] self.cursor = x, y - if self.manager.mode == 
GraphMode.EDGE and self.drawing_edge is not None: - self.drawing_edge.drawing(self.cursor) - if self.manager.mode == GraphMode.ANNOTATION: - if is_draw_shape(self.manager.annotation_type) and self.shape_drawing: + if self.mode == GraphMode.EDGE and self.drawing_edge is not None: + self.drawing_edge.move_dst(self.cursor) + if self.mode == GraphMode.ANNOTATION: + if is_draw_shape(self.annotation_type) and self.shape_drawing: shape = self.shapes[self.selected] shape.shape_motion(x, y) return - elif is_marker(self.manager.annotation_type): + elif is_marker(self.annotation_type): r = self.app.toolbar.marker_frame.size.get() self.create_oval( x - r, @@ -470,28 +706,34 @@ class CanvasGraph(tk.Canvas): ) return - if self.manager.mode == GraphMode.EDGE: + if self.mode == GraphMode.EDGE: return # move selected objects if self.selection: for selected_id in self.selection: - if self.manager.mode in MOVE_SHAPE_MODES and selected_id in self.shapes: + if self.mode in MOVE_SHAPE_MODES and selected_id in self.shapes: shape = self.shapes[selected_id] shape.motion(x_offset, y_offset) - elif self.manager.mode in MOVE_NODE_MODES and selected_id in self.nodes: + + if self.mode in MOVE_NODE_MODES and selected_id in self.nodes: node = self.nodes[selected_id] node.motion(x_offset, y_offset, update=self.core.is_runtime()) - elif ( - self.manager.mode in MOVE_NODE_MODES - and selected_id in self.shadow_nodes - ): - shadow_node = self.shadow_nodes[selected_id] - shadow_node.motion(x_offset, y_offset) else: - if self.select_box and self.manager.mode == GraphMode.SELECT: + if self.select_box and self.mode == GraphMode.SELECT: self.select_box.shape_motion(x, y) + def press_delete(self, _event: tk.Event) -> None: + """ + delete selected nodes and any data that relates to it + """ + logging.debug("press delete key") + if not self.app.core.is_runtime(): + self.delete_selected_objects() + self.app.default_info() + else: + logging.debug("node deletion is disabled during runtime state") + def double_click(self, event: tk.Event) -> None: selected = self.get_selected(event) if selected is not None and selected in self.shapes: @@ -504,19 +746,21 @@ class CanvasGraph(tk.Canvas): return actual_x, actual_y = self.get_actual_coords(x, y) core_node = self.core.create_node( - actual_x, - actual_y, - self.manager.node_draw.node_type, - self.manager.node_draw.model, + actual_x, actual_y, self.node_draw.node_type, self.node_draw.model ) if not core_node: return - core_node.canvas = self.id - node = CanvasNode(self.app, self, x, y, core_node, self.manager.node_draw.image) + try: + image_enum = self.node_draw.image_enum + self.node_draw.image = self.app.get_icon(image_enum, ICON_SIZE) + except AttributeError: + image_file = self.node_draw.image_file + self.node_draw.image = self.app.get_custom_icon(image_file, ICON_SIZE) + node = CanvasNode(self.app, x, y, core_node, self.node_draw.image) self.nodes[node.id] = node self.core.set_canvas_node(core_node, node) - def width_and_height(self) -> tuple[int, int]: + def width_and_height(self) -> Tuple[int, int]: """ retrieve canvas width and height in pixels """ @@ -601,11 +845,11 @@ class CanvasGraph(tk.Canvas): self.redraw_canvas((image.width(), image.height())) self.draw_wallpaper(image) - def redraw_canvas(self, dimensions: tuple[int, int] = None) -> None: - logger.debug("redrawing canvas to dimensions: %s", dimensions) + def redraw_canvas(self, dimensions: Tuple[int, int] = None) -> None: + logging.debug("redrawing canvas to dimensions: %s", dimensions) # reset scale and move back to original 
position - logger.debug("resetting scaling: %s %s", self.ratio, self.offset) + logging.debug("resetting scaling: %s %s", self.ratio, self.offset) factor = 1 / self.ratio self.scale(tk.ALL, self.offset[0], self.offset[1], factor, factor) self.move(tk.ALL, -self.offset[0], -self.offset[1]) @@ -620,15 +864,15 @@ class CanvasGraph(tk.Canvas): # redraw gridlines to new canvas size self.delete(tags.GRIDLINE) self.draw_grid() - self.app.manager.show_grid.click_handler() + self.app.canvas.show_grid.click_handler() def redraw_wallpaper(self) -> None: if self.adjust_to_dim.get(): - logger.debug("drawing wallpaper to canvas dimensions") + logging.debug("drawing wallpaper to canvas dimensions") self.resize_to_wallpaper() else: option = ScaleOption(self.scale_option.get()) - logger.debug("drawing canvas using scaling option: %s", option) + logging.debug("drawing canvas using scaling option: %s", option) if option == ScaleOption.UPPER_LEFT: self.wallpaper_upper_left() elif option == ScaleOption.CENTERED: @@ -636,7 +880,7 @@ class CanvasGraph(tk.Canvas): elif option == ScaleOption.SCALED: self.wallpaper_scaled() elif option == ScaleOption.TILED: - logger.warning("tiled background not implemented yet") + logging.warning("tiled background not implemented yet") self.organize() def organize(self) -> None: @@ -644,7 +888,7 @@ class CanvasGraph(tk.Canvas): self.tag_raise(tag) def set_wallpaper(self, filename: Optional[str]) -> None: - logger.info("setting canvas(%s) background: %s", self.id, filename) + logging.debug("setting wallpaper: %s", filename) if filename: img = Image.open(filename) self.wallpaper = img @@ -657,48 +901,58 @@ class CanvasGraph(tk.Canvas): self.wallpaper_file = None def is_selection_mode(self) -> bool: - return self.manager.mode == GraphMode.SELECT + return self.mode == GraphMode.SELECT def create_edge(self, src: CanvasNode, dst: CanvasNode) -> CanvasEdge: """ create an edge between source node and destination node """ - edge = CanvasEdge(self.app, src) - edge.complete(dst) + pos = (src.core_node.position.x, src.core_node.position.y) + edge = CanvasEdge(self, src.id, pos, pos) + self.complete_edge(src, dst, edge) return edge - def copy_selected(self, _event: tk.Event = None) -> None: + def complete_edge( + self, + src: CanvasNode, + dst: CanvasNode, + edge: CanvasEdge, + link: Optional[Link] = None, + ) -> None: + linked_wireless = self.is_linked_wireless(src.id, dst.id) + edge.complete(dst.id, linked_wireless) + if link is None: + link = self.core.create_link(edge, src, dst) + edge.link = link + if link.iface1: + iface1 = link.iface1 + src.ifaces[iface1.id] = iface1 + if link.iface2: + iface2 = link.iface2 + dst.ifaces[iface2.id] = iface2 + src.edges.add(edge) + dst.edges.add(edge) + edge.token = create_edge_token(edge.link) + self.arc_common_edges(edge) + edge.draw_labels() + edge.check_options() + self.edges[edge.token] = edge + self.core.save_edge(edge, src, dst) + + def copy(self) -> None: if self.core.is_runtime(): - logger.debug("copy is disabled during runtime state") + logging.debug("copy is disabled during runtime state") return if self.selection: - logger.debug("to copy nodes: %s", self.selection) + logging.debug("to copy nodes: %s", self.selection) self.to_copy.clear() for node_id in self.selection.keys(): canvas_node = self.nodes[node_id] self.to_copy.append(canvas_node) - def cut_selected(self, _event: tk.Event = None) -> None: + def paste(self) -> None: if self.core.is_runtime(): - logger.debug("cut is disabled during runtime state") - return - self.copy_selected() - 
self.delete_selected() - - def delete_selected(self, _event: tk.Event = None) -> None: - """ - delete selected nodes and any data that relates to it - """ - logger.debug("press delete key") - if self.core.is_runtime(): - logger.debug("node deletion is disabled during runtime state") - return - self.delete_selected_objects() - self.app.default_info() - - def paste_selected(self, _event: tk.Event = None) -> None: - if self.core.is_runtime(): - logger.debug("paste is disabled during runtime state") + logging.debug("paste is disabled during runtime state") return # maps original node canvas id to copy node canvas id copy_map = {} @@ -715,9 +969,7 @@ class CanvasGraph(tk.Canvas): ) if not copy: continue - node = CanvasNode( - self.app, self, scaled_x, scaled_y, copy, canvas_node.image - ) + node = CanvasNode(self.app, scaled_x, scaled_y, copy, canvas_node.image) # copy configurations and services node.core_node.services = core_node.services.copy() node.core_node.config_services = core_node.config_services.copy() @@ -804,49 +1056,49 @@ class CanvasGraph(tk.Canvas): ) self.tag_raise(tags.NODE) + def is_linked_wireless(self, src: int, dst: int) -> bool: + src_node = self.nodes[src] + dst_node = self.nodes[dst] + src_node_type = src_node.core_node.type + dst_node_type = dst_node.core_node.type + is_src_wireless = NodeUtils.is_wireless_node(src_node_type) + is_dst_wireless = NodeUtils.is_wireless_node(dst_node_type) + + # update the wlan/EMANE network + wlan_network = self.wireless_network + if is_src_wireless and not is_dst_wireless: + if src not in wlan_network: + wlan_network[src] = set() + wlan_network[src].add(dst) + elif not is_src_wireless and is_dst_wireless: + if dst not in wlan_network: + wlan_network[dst] = set() + wlan_network[dst].add(src) + return is_src_wireless or is_dst_wireless + + def clear_throughputs(self) -> None: + for edge in self.edges.values(): + edge.clear_throughput() + def scale_graph(self) -> None: - for node_id, canvas_node in self.nodes.items(): - image = nutils.get_icon(canvas_node.core_node, self.app) - self.itemconfig(node_id, image=image) - canvas_node.image = image + for nid, canvas_node in self.nodes.items(): + img = None + if NodeUtils.is_custom( + canvas_node.core_node.type, canvas_node.core_node.model + ): + for custom_node in self.app.guiconfig.nodes: + if custom_node.name == canvas_node.core_node.model: + img = self.app.get_custom_icon(custom_node.image, ICON_SIZE) + else: + image_enum = TypeToImage.get( + canvas_node.core_node.type, canvas_node.core_node.model + ) + img = self.app.get_icon(image_enum, ICON_SIZE) + + self.itemconfig(nid, image=img) + canvas_node.image = img canvas_node.scale_text() canvas_node.scale_antennas() - for edge_id in self.find_withtag(tags.EDGE): - self.itemconfig(edge_id, width=int(EDGE_WIDTH * self.app.app_scale)) - def get_metadata(self) -> dict[str, Any]: - wallpaper_path = None - if self.wallpaper_file: - wallpaper = Path(self.wallpaper_file) - if appconfig.BACKGROUNDS_PATH == wallpaper.parent: - wallpaper_path = wallpaper.name - else: - wallpaper_path = str(wallpaper) - return dict( - id=self.id, - wallpaper=wallpaper_path, - wallpaper_style=self.scale_option.get(), - fit_image=self.adjust_to_dim.get(), - dimensions=self.current_dimensions, - ) - - def parse_metadata(self, config: dict[str, Any]) -> None: - fit_image = config.get("fit_image", False) - self.adjust_to_dim.set(fit_image) - wallpaper_style = config.get("wallpaper_style", 1) - self.scale_option.set(wallpaper_style) - dimensions = config.get("dimensions") - if 
dimensions: - self.redraw_canvas(dimensions) - wallpaper = config.get("wallpaper") - if wallpaper: - wallpaper = Path(wallpaper) - if not wallpaper.is_file(): - wallpaper = appconfig.BACKGROUNDS_PATH.joinpath(wallpaper) - logger.info("canvas(%s), wallpaper: %s", self.id, wallpaper) - if wallpaper.is_file(): - self.set_wallpaper(str(wallpaper)) - else: - self.app.show_error( - "Background Error", f"background file not found: {wallpaper}" - ) + for edge_id in self.find_withtag(tags.EDGE): + self.itemconfig(edge_id, width=int(EDGE_WIDTH * self.app.app_scale)) diff --git a/daemon/core/gui/graph/manager.py b/daemon/core/gui/graph/manager.py deleted file mode 100644 index b2745f5c..00000000 --- a/daemon/core/gui/graph/manager.py +++ /dev/null @@ -1,434 +0,0 @@ -import json -import logging -import tkinter as tk -from collections.abc import ValuesView -from copy import deepcopy -from tkinter import BooleanVar, messagebox, ttk -from typing import TYPE_CHECKING, Any, Literal, Optional - -from core.api.grpc.wrappers import Link, LinkType, Node, Session, ThroughputsEvent -from core.gui import nodeutils as nutils -from core.gui.graph import tags -from core.gui.graph.edges import ( - CanvasEdge, - CanvasWirelessEdge, - create_edge_token, - create_wireless_token, -) -from core.gui.graph.enums import GraphMode -from core.gui.graph.graph import CanvasGraph -from core.gui.graph.node import CanvasNode -from core.gui.graph.shape import Shape -from core.gui.graph.shapeutils import ShapeType -from core.gui.nodeutils import NodeDraw - -logger = logging.getLogger(__name__) - -if TYPE_CHECKING: - from core.gui.app import Application - from core.gui.coreclient import CoreClient - - -class ShowVar(BooleanVar): - def __init__(self, manager: "CanvasManager", tag: str, value: bool) -> None: - super().__init__(value=value) - self.manager: "CanvasManager" = manager - self.tag: str = tag - - def state(self) -> Literal["normal", "hidden"]: - return tk.NORMAL if self.get() else tk.HIDDEN - - def click_handler(self) -> None: - for canvas in self.manager.all(): - canvas.itemconfigure(self.tag, state=self.state()) - - -class ShowNodeLabels(ShowVar): - def click_handler(self) -> None: - state = self.state() - for canvas in self.manager.all(): - for node in canvas.nodes.values(): - if not node.hidden: - node.set_label(state) - - -class ShowLinks(ShowVar): - def click_handler(self) -> None: - for edge in self.manager.edges.values(): - if not edge.hidden: - edge.check_visibility() - - -class ShowLinkLabels(ShowVar): - def click_handler(self) -> None: - state = self.state() - for edge in self.manager.edges.values(): - if not edge.hidden: - edge.set_labels(state) - - -class CanvasManager: - def __init__( - self, master: tk.BaseWidget, app: "Application", core: "CoreClient" - ) -> None: - self.master: tk.BaseWidget = master - self.app: "Application" = app - self.core: "CoreClient" = core - - # canvas interactions - self.mode: GraphMode = GraphMode.SELECT - self.annotation_type: Optional[ShapeType] = None - self.node_draw: Optional[NodeDraw] = None - self.canvases: dict[int, CanvasGraph] = {} - - # global edge management - self.edges: dict[str, CanvasEdge] = {} - self.wireless_edges: dict[str, CanvasWirelessEdge] = {} - - # global canvas settings - self.default_dimensions: tuple[int, int] = ( - self.app.guiconfig.preferences.width, - self.app.guiconfig.preferences.height, - ) - self.show_node_labels: ShowVar = ShowNodeLabels( - self, tags.NODE_LABEL, value=True - ) - self.show_link_labels: ShowVar = ShowLinkLabels( - self, 
tags.LINK_LABEL, value=True - ) - self.show_links: ShowVar = ShowLinks(self, tags.EDGE, value=True) - self.show_wireless: ShowVar = ShowVar(self, tags.WIRELESS_EDGE, value=True) - self.show_grid: ShowVar = ShowVar(self, tags.GRIDLINE, value=True) - self.show_annotations: ShowVar = ShowVar(self, tags.ANNOTATION, value=True) - self.show_loss_links: ShowVar = ShowLinks(self, tags.LOSS_EDGES, value=True) - self.show_iface_names: BooleanVar = BooleanVar(value=False) - self.show_ip4s: BooleanVar = BooleanVar(value=True) - self.show_ip6s: BooleanVar = BooleanVar(value=True) - - # throughput settings - self.throughput_threshold: float = 250.0 - self.throughput_width: int = 10 - self.throughput_color: str = "#FF0000" - - # widget - self.notebook: Optional[ttk.Notebook] = None - self.canvas_ids: dict[str, int] = {} - self.unique_ids: dict[int, str] = {} - self.draw() - - self.setup_bindings() - # start with a single tab by default - self.add_canvas() - - def setup_bindings(self) -> None: - self.notebook.bind("<>", self.tab_change) - - def tab_change(self, _event: tk.Event) -> None: - # ignore tab change events before tab data has been setup - unique_id = self.notebook.select() - if not unique_id or unique_id not in self.canvas_ids: - return - canvas = self.current() - self.app.statusbar.set_zoom(canvas.ratio) - - def select(self, tab_id: int): - unique_id = self.unique_ids.get(tab_id) - self.notebook.select(unique_id) - - def draw(self) -> None: - self.notebook = ttk.Notebook(self.master) - self.notebook.grid(sticky=tk.NSEW, pady=1) - - def _next_id(self) -> int: - _id = 1 - canvas_ids = set(self.canvas_ids.values()) - while _id in canvas_ids: - _id += 1 - return _id - - def current(self) -> CanvasGraph: - unique_id = self.notebook.select() - canvas_id = self.canvas_ids[unique_id] - return self.get(canvas_id) - - def all(self) -> ValuesView[CanvasGraph]: - return self.canvases.values() - - def get(self, canvas_id: int) -> CanvasGraph: - canvas = self.canvases.get(canvas_id) - if not canvas: - canvas = self.add_canvas(canvas_id) - return canvas - - def add_canvas(self, canvas_id: int = None) -> CanvasGraph: - # create tab frame - tab = ttk.Frame(self.notebook, padding=0) - tab.grid(sticky=tk.NSEW) - tab.columnconfigure(0, weight=1) - tab.rowconfigure(0, weight=1) - if canvas_id is None: - canvas_id = self._next_id() - self.notebook.add(tab, text=f"Canvas {canvas_id}") - unique_id = self.notebook.tabs()[-1] - logger.info("creating canvas(%s)", canvas_id) - self.canvas_ids[unique_id] = canvas_id - self.unique_ids[canvas_id] = unique_id - - # create canvas - canvas = CanvasGraph( - tab, self.app, self, self.core, canvas_id, self.default_dimensions - ) - canvas.grid(sticky=tk.NSEW) - self.canvases[canvas_id] = canvas - - # add scrollbars - scroll_y = ttk.Scrollbar(tab, command=canvas.yview) - scroll_y.grid(row=0, column=1, sticky=tk.NS) - scroll_x = ttk.Scrollbar(tab, orient=tk.HORIZONTAL, command=canvas.xview) - scroll_x.grid(row=1, column=0, sticky=tk.EW) - canvas.configure(xscrollcommand=scroll_x.set) - canvas.configure(yscrollcommand=scroll_y.set) - return canvas - - def delete_canvas(self) -> None: - if len(self.notebook.tabs()) == 1: - messagebox.showinfo("Canvas", "Cannot delete last canvas", parent=self.app) - return - unique_id = self.notebook.select() - self.notebook.forget(unique_id) - canvas_id = self.canvas_ids.pop(unique_id) - canvas = self.canvases.pop(canvas_id) - edges = set() - for node in canvas.nodes.values(): - node.delete() - while node.edges: - edge = node.edges.pop() - if edge in 
edges: - continue - edges.add(edge) - edge.delete() - - def join(self, session: Session) -> None: - # clear out all canvases - for canvas_id in self.notebook.tabs(): - self.notebook.forget(canvas_id) - self.canvases.clear() - self.canvas_ids.clear() - self.unique_ids.clear() - self.edges.clear() - self.wireless_edges.clear() - logger.info("cleared canvases") - - # reset settings - self.show_node_labels.set(True) - self.show_link_labels.set(True) - self.show_grid.set(True) - self.show_annotations.set(True) - self.show_iface_names.set(False) - self.show_ip4s.set(True) - self.show_ip6s.set(True) - self.show_loss_links.set(True) - self.mode = GraphMode.SELECT - self.annotation_type = None - self.node_draw = None - - # draw session - self.draw_session(session) - - def draw_session(self, session: Session) -> None: - # draw canvas configurations and shapes - self.parse_metadata_canvas(session.metadata) - self.parse_metadata_shapes(session.metadata) - - # create session nodes - for core_node in session.nodes.values(): - # add node, avoiding ignored nodes - if nutils.should_ignore(core_node): - continue - self.add_core_node(core_node) - - # organize canvas tabs - canvas_ids = sorted(self.canvases) - for index, canvas_id in enumerate(canvas_ids): - canvas = self.canvases[canvas_id] - self.notebook.insert(index, canvas.master) - - # draw existing links - for link in session.links: - node1 = self.core.get_canvas_node(link.node1_id) - node2 = self.core.get_canvas_node(link.node2_id) - if link.type == LinkType.WIRELESS: - self.add_wireless_edge(node1, node2, link) - else: - self.add_wired_edge(node1, node2, link) - - # organize canvas order - for canvas in self.canvases.values(): - canvas.organize() - - # parse metada for edge configs and hidden nodes - self.parse_metadata_edges(session.metadata) - self.parse_metadata_hidden(session.metadata) - - # create a default canvas if none were created prior - if not self.canvases: - self.add_canvas() - - def redraw_canvas(self, dimensions: tuple[int, int]) -> None: - canvas = self.current() - canvas.redraw_canvas(dimensions) - if canvas.wallpaper: - canvas.redraw_wallpaper() - - def get_metadata(self) -> dict[str, Any]: - canvases = [x.get_metadata() for x in self.all()] - return dict(gridlines=self.show_grid.get(), canvases=canvases) - - def parse_metadata_canvas(self, metadata: dict[str, Any]) -> None: - # canvas setting - canvas_config = metadata.get("canvas") - logger.debug("canvas metadata: %s", canvas_config) - if not canvas_config: - return - canvas_config = json.loads(canvas_config) - # get configured dimensions and gridlines option - gridlines = canvas_config.get("gridlines", True) - self.show_grid.set(gridlines) - - # get background configurations - for canvas_config in canvas_config.get("canvases", []): - canvas_id = canvas_config.get("id") - if canvas_id is None: - logger.error("canvas config id not provided") - continue - canvas = self.get(canvas_id) - canvas.parse_metadata(canvas_config) - - def parse_metadata_shapes(self, metadata: dict[str, Any]) -> None: - # load saved shapes - shapes_config = metadata.get("shapes") - if not shapes_config: - return - shapes_config = json.loads(shapes_config) - for shape_config in shapes_config: - logger.debug("loading shape: %s", shape_config) - Shape.from_metadata(self.app, shape_config) - - def parse_metadata_edges(self, metadata: dict[str, Any]) -> None: - # load edges config - edges_config = metadata.get("edges") - if not edges_config: - return - edges_config = json.loads(edges_config) - logger.info("edges 
config: %s", edges_config) - for edge_config in edges_config: - edge_token = edge_config["token"] - edge = self.core.links.get(edge_token) - if edge: - edge.width = edge_config["width"] - edge.color = edge_config["color"] - edge.redraw() - else: - logger.warning("invalid edge token to configure: %s", edge_token) - - def parse_metadata_hidden(self, metadata: dict[str, Any]) -> None: - # read hidden nodes - hidden_config = metadata.get("hidden") - if not hidden_config: - return - hidden_config = json.loads(hidden_config) - for node_id in hidden_config: - canvas_node = self.core.canvas_nodes.get(node_id) - if canvas_node: - canvas_node.hide() - else: - logger.warning("invalid node to hide: %s", node_id) - - def add_core_node(self, core_node: Node) -> None: - # get canvas tab for node - canvas_id = core_node.canvas if core_node.canvas > 0 else 1 - logger.info("adding core node canvas(%s): %s", core_node.name, canvas_id) - canvas = self.get(canvas_id) - image = nutils.get_icon(core_node, self.app) - x = core_node.position.x - y = core_node.position.y - node = CanvasNode(self.app, canvas, x, y, core_node, image) - canvas.nodes[node.id] = node - self.core.set_canvas_node(core_node, node) - - def set_throughputs(self, throughputs_event: ThroughputsEvent): - for iface_throughput in throughputs_event.iface_throughputs: - node_id = iface_throughput.node_id - iface_id = iface_throughput.iface_id - throughput = iface_throughput.throughput - iface_to_edge_id = (node_id, iface_id) - edge = self.core.iface_to_edge.get(iface_to_edge_id) - if edge: - edge.set_throughput(throughput) - - def clear_throughputs(self) -> None: - for edge in self.edges.values(): - edge.clear_throughput() - - def stopped_session(self) -> None: - # clear wireless edges - for edge in self.wireless_edges.values(): - edge.delete() - self.wireless_edges.clear() - self.clear_throughputs() - - def update_wired_edge(self, link: Link) -> None: - token = create_edge_token(link) - edge = self.edges.get(token) - if edge: - edge.link.options = deepcopy(link.options) - edge.draw_link_options() - edge.check_visibility() - - def delete_wired_edge(self, link: Link) -> None: - token = create_edge_token(link) - edge = self.edges.get(token) - if edge: - edge.delete() - - def add_wired_edge(self, src: CanvasNode, dst: CanvasNode, link: Link) -> None: - token = create_edge_token(link) - if token in self.edges and link.options.unidirectional: - edge = self.edges[token] - edge.asymmetric_link = link - edge.redraw() - elif token not in self.edges: - edge = CanvasEdge(self.app, src, dst) - edge.complete(dst, link) - - def add_wireless_edge(self, src: CanvasNode, dst: CanvasNode, link: Link) -> None: - network_id = link.network_id if link.network_id else None - token = create_wireless_token(src.id, dst.id, network_id) - if token in self.wireless_edges: - logger.warning("ignoring link that already exists: %s", link) - return - edge = CanvasWirelessEdge(self.app, src, dst, network_id, token, link) - self.wireless_edges[token] = edge - - def delete_wireless_edge( - self, src: CanvasNode, dst: CanvasNode, link: Link - ) -> None: - network_id = link.network_id if link.network_id else None - token = create_wireless_token(src.id, dst.id, network_id) - if token not in self.wireless_edges: - return - edge = self.wireless_edges.pop(token) - edge.delete() - - def update_wireless_edge( - self, src: CanvasNode, dst: CanvasNode, link: Link - ) -> None: - if not link.label: - return - network_id = link.network_id if link.network_id else None - token = 
create_wireless_token(src.id, dst.id, network_id) - if token not in self.wireless_edges: - self.add_wireless_edge(src, dst, link) - else: - edge = self.wireless_edges[token] - edge.middle_label_text(link.label) diff --git a/daemon/core/gui/graph/node.py b/daemon/core/gui/graph/node.py index 0cfbf2e9..b4ab3767 100644 --- a/daemon/core/gui/graph/node.py +++ b/daemon/core/gui/graph/node.py @@ -2,29 +2,25 @@ import functools import logging import tkinter as tk from pathlib import Path -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, List, Set import grpc from PIL.ImageTk import PhotoImage -from core.api.grpc.wrappers import Interface, Node, NodeType, ServiceAction -from core.gui import images -from core.gui import nodeutils as nutils -from core.gui import themes +from core.api.grpc.wrappers import Interface, Node, NodeType +from core.gui import nodeutils, themes from core.gui.dialogs.emaneconfig import EmaneConfigDialog from core.gui.dialogs.mobilityconfig import MobilityConfigDialog from core.gui.dialogs.nodeconfig import NodeConfigDialog from core.gui.dialogs.nodeconfigservice import NodeConfigServiceDialog from core.gui.dialogs.nodeservice import NodeServiceDialog -from core.gui.dialogs.wirelessconfig import WirelessConfigDialog from core.gui.dialogs.wlanconfig import WlanConfigDialog from core.gui.frames.node import NodeInfoFrame from core.gui.graph import tags from core.gui.graph.edges import CanvasEdge, CanvasWirelessEdge from core.gui.graph.tooltip import CanvasTooltip -from core.gui.images import ImageEnum - -logger = logging.getLogger(__name__) +from core.gui.images import ImageEnum, Images +from core.gui.nodeutils import ANTENNA_SIZE, NodeUtils if TYPE_CHECKING: from core.gui.app import Application @@ -35,16 +31,10 @@ NODE_TEXT_OFFSET: int = 5 class CanvasNode: def __init__( - self, - app: "Application", - canvas: "CanvasGraph", - x: float, - y: float, - core_node: Node, - image: PhotoImage, + self, app: "Application", x: float, y: float, core_node: Node, image: PhotoImage ): self.app: "Application" = app - self.canvas: "CanvasGraph" = canvas + self.canvas: "CanvasGraph" = app.canvas self.image: PhotoImage = image self.core_node: Node = core_node self.id: int = self.canvas.create_image( @@ -59,22 +49,18 @@ class CanvasNode: tags=tags.NODE_LABEL, font=self.app.icon_text_font, fill="#0000CD", - state=self.app.manager.show_node_labels.state(), + state=self.canvas.show_node_labels.state(), ) self.tooltip: CanvasTooltip = CanvasTooltip(self.canvas) - self.edges: set[CanvasEdge] = set() - self.ifaces: dict[int, Interface] = {} - self.wireless_edges: set[CanvasWirelessEdge] = set() - self.antennas: list[int] = [] - self.antenna_images: dict[int, PhotoImage] = {} - self.hidden: bool = False + self.edges: Set[CanvasEdge] = set() + self.ifaces: Dict[int, Interface] = {} + self.wireless_edges: Set[CanvasWirelessEdge] = set() + self.antennas: List[int] = [] + self.antenna_images: Dict[int, PhotoImage] = {} self.setup_bindings() self.context: tk.Menu = tk.Menu(self.canvas) themes.style_menu(self.context) - def position(self) -> tuple[int, int]: - return self.canvas.coords(self.id) - def next_iface_id(self) -> int: i = 0 while i in self.ifaces: @@ -89,15 +75,15 @@ class CanvasNode: self.canvas.tag_bind(self.id, "", self.show_info) def delete(self) -> None: - logger.debug("Delete canvas node for %s", self.core_node) + logging.debug("Delete canvas node for %s", self.core_node) self.canvas.delete(self.id) self.canvas.delete(self.text_id) self.delete_antennas() def 
add_antenna(self) -> None: - x, y = self.position() + x, y = self.canvas.coords(self.id) offset = len(self.antennas) * 8 * self.app.app_scale - img = self.app.get_enum_icon(ImageEnum.ANTENNA, width=images.ANTENNA_SIZE) + img = self.app.get_icon(ImageEnum.ANTENNA, ANTENNA_SIZE) antenna_id = self.canvas.create_image( x - 16 + offset, y - int(23 * self.app.app_scale), @@ -112,7 +98,7 @@ class CanvasNode: """ delete one antenna """ - logger.debug("Delete an antenna on %s", self.core_node.name) + logging.debug("Delete an antenna on %s", self.core_node.name) if self.antennas: antenna_id = self.antennas.pop() self.canvas.delete(antenna_id) @@ -122,7 +108,7 @@ class CanvasNode: """ delete all antennas """ - logger.debug("Remove all antennas for %s", self.core_node.name) + logging.debug("Remove all antennas for %s", self.core_node.name) for antenna_id in self.antennas: self.canvas.delete(antenna_id) self.antennas.clear() @@ -153,14 +139,15 @@ class CanvasNode: def move(self, x: float, y: float) -> None: x, y = self.canvas.get_scaled_coords(x, y) - current_x, current_y = self.position() + current_x, current_y = self.canvas.coords(self.id) x_offset = x - current_x y_offset = y - current_y self.motion(x_offset, y_offset, update=False) def motion(self, x_offset: float, y_offset: float, update: bool = True) -> None: - original_position = self.position() + original_position = self.canvas.coords(self.id) self.canvas.move(self.id, x_offset, y_offset) + pos = self.canvas.coords(self.id) # check new position bbox = self.canvas.bbox(self.id) @@ -178,12 +165,11 @@ class CanvasNode: # move edges for edge in self.edges: - edge.move_node(self) + edge.move_node(self.id, pos) for edge in self.wireless_edges: - edge.move_node(self) + edge.move_node(self.id, pos) # set actual coords for node and update core is running - pos = self.position() real_x, real_y = self.canvas.get_actual_coords(*pos) self.core_node.position.x = real_x self.core_node.position.y = real_y @@ -193,7 +179,7 @@ class CanvasNode: def on_enter(self, event: tk.Event) -> None: is_runtime = self.app.core.is_runtime() has_observer = self.app.core.observer is not None - is_container = nutils.is_container(self.core_node) + is_container = NodeUtils.is_container_node(self.core_node.type) if is_runtime and has_observer and is_container: self.tooltip.text.set("waiting...") self.tooltip.on_enter(event) @@ -208,7 +194,7 @@ class CanvasNode: def double_click(self, event: tk.Event) -> None: if self.app.core.is_runtime(): - if nutils.is_container(self.core_node): + if NodeUtils.is_container_node(self.core_node.type): self.canvas.core.launch_terminal(self.core_node.id) else: self.show_config() @@ -220,7 +206,6 @@ class CanvasNode: # clear existing menu self.context.delete(0, tk.END) is_wlan = self.core_node.type == NodeType.WIRELESS_LAN - is_wireless = self.core_node.type == NodeType.WIRELESS is_emane = self.core_node.type == NodeType.EMANE is_mobility = is_wlan or is_emane if self.app.core.is_runtime(): @@ -233,39 +218,17 @@ class CanvasNode: self.context.add_command( label="WLAN Config", command=self.show_wlan_config ) - if is_wireless: - self.context.add_command( - label="Wireless Config", command=self.show_wireless_config - ) if is_mobility and self.core_node.id in self.app.core.mobility_players: self.context.add_command( label="Mobility Player", command=self.show_mobility_player ) - if nutils.is_container(self.core_node): - services_menu = tk.Menu(self.context) - for service in sorted(self.core_node.config_services): - service_menu = tk.Menu(services_menu) - 
themes.style_menu(service_menu) - start_func = functools.partial(self.start_service, service) - service_menu.add_command(label="Start", command=start_func) - stop_func = functools.partial(self.stop_service, service) - service_menu.add_command(label="Stop", command=stop_func) - restart_func = functools.partial(self.restart_service, service) - service_menu.add_command(label="Restart", command=restart_func) - validate_func = functools.partial(self.validate_service, service) - service_menu.add_command(label="Validate", command=validate_func) - services_menu.add_cascade(label=service, menu=service_menu) - themes.style_menu(services_menu) - self.context.add_cascade(label="Services", menu=services_menu) else: self.context.add_command(label="Configure", command=self.show_config) - if nutils.is_container(self.core_node): + if NodeUtils.is_container_node(self.core_node.type): + self.context.add_command(label="Services", command=self.show_services) self.context.add_command( label="Config Services", command=self.show_config_services ) - self.context.add_command( - label="Services (Deprecated)", command=self.show_services - ) if is_emane: self.context.add_command( label="EMANE Config", command=self.show_emane_config @@ -274,55 +237,35 @@ class CanvasNode: self.context.add_command( label="WLAN Config", command=self.show_wlan_config ) - if is_wireless: - self.context.add_command( - label="Wireless Config", command=self.show_wireless_config - ) if is_mobility: self.context.add_command( label="Mobility Config", command=self.show_mobility_config ) - if nutils.is_wireless(self.core_node): + if NodeUtils.is_wireless_node(self.core_node.type): self.context.add_command( label="Link To Selected", command=self.wireless_link_selected ) - - link_menu = tk.Menu(self.context) - for canvas in self.app.manager.all(): - canvas_menu = tk.Menu(link_menu) - themes.style_menu(canvas_menu) - for node in canvas.nodes.values(): - if not self.is_linkable(node): - continue - func_link = functools.partial(self.click_link, node) - canvas_menu.add_command( - label=node.core_node.name, command=func_link - ) - link_menu.add_cascade(label=f"Canvas {canvas.id}", menu=canvas_menu) - themes.style_menu(link_menu) - self.context.add_cascade(label="Link", menu=link_menu) - unlink_menu = tk.Menu(self.context) for edge in self.edges: - other_node = edge.other_node(self) - other_iface = edge.other_iface(self) - label = other_node.core_node.name - if other_iface: - iface_label = other_iface.id - if other_iface.name: - iface_label = other_iface.name - label = f"{label}:{iface_label}" + link = edge.link + if self.id == edge.src: + other_id = edge.dst + other_iface = link.iface2.name if link.iface2 else None + else: + other_id = edge.src + other_iface = link.iface1.name if link.iface1 else None + other_node = self.canvas.nodes[other_id] + other_name = other_node.core_node.name + label = f"{other_name}:{other_iface}" if other_iface else other_name func_unlink = functools.partial(self.click_unlink, edge) unlink_menu.add_command(label=label, command=func_unlink) themes.style_menu(unlink_menu) self.context.add_cascade(label="Unlink", menu=unlink_menu) - edit_menu = tk.Menu(self.context) themes.style_menu(edit_menu) edit_menu.add_command(label="Cut", command=self.click_cut) edit_menu.add_command(label="Copy", command=self.canvas_copy) edit_menu.add_command(label="Delete", command=self.canvas_delete) - edit_menu.add_command(label="Hide", command=self.click_hide) self.context.add_cascade(label="Edit", menu=edit_menu) self.context.tk_popup(event.x_root, 
event.y_root) @@ -330,18 +273,10 @@ class CanvasNode: self.canvas_copy() self.canvas_delete() - def click_hide(self) -> None: - self.canvas.clear_selection() - self.hide() - def click_unlink(self, edge: CanvasEdge) -> None: - edge.delete() + self.canvas.delete_edge(edge) self.app.default_info() - def click_link(self, node: "CanvasNode") -> None: - edge = CanvasEdge(self.app, self, node) - edge.complete(node) - def canvas_delete(self) -> None: self.canvas.clear_selection() self.canvas.select_object(self.id) @@ -350,16 +285,12 @@ class CanvasNode: def canvas_copy(self) -> None: self.canvas.clear_selection() self.canvas.select_object(self.id) - self.canvas.copy_selected() + self.canvas.copy() def show_config(self) -> None: dialog = NodeConfigDialog(self.app, self) dialog.show() - def show_wireless_config(self) -> None: - dialog = WirelessConfigDialog(self.app, self) - dialog.show() - def show_wlan_config(self) -> None: dialog = WlanConfigDialog(self.app, self) if not dialog.has_error: @@ -389,11 +320,15 @@ class CanvasNode: def has_emane_link(self, iface_id: int) -> Node: result = None for edge in self.edges: - other_node = edge.other_node(self) - iface = edge.iface(self) - edge_iface_id = iface.id if iface else None + if self.id == edge.src: + other_id = edge.dst + edge_iface_id = edge.link.iface1.id + else: + other_id = edge.src + edge_iface_id = edge.link.iface2.id if edge_iface_id != iface_id: continue + other_node = self.canvas.nodes[other_id] if other_node.core_node.type == NodeType.EMANE: result = other_node.core_node break @@ -409,7 +344,7 @@ class CanvasNode: def scale_antennas(self) -> None: for i in range(len(self.antennas)): antenna_id = self.antennas[i] - image = self.app.get_enum_icon(ImageEnum.ANTENNA, width=images.ANTENNA_SIZE) + image = self.app.get_icon(ImageEnum.ANTENNA, ANTENNA_SIZE) self.canvas.itemconfig(antenna_id, image=image) self.antenna_images[antenna_id] = image node_x, node_y = self.canvas.coords(self.id) @@ -420,169 +355,14 @@ class CanvasNode: def update_icon(self, icon_path: str) -> None: if not Path(icon_path).exists(): - logger.error(f"node icon does not exist: {icon_path}") + logging.error(f"node icon does not exist: {icon_path}") return self.core_node.icon = icon_path - self.image = images.from_file(icon_path, width=images.NODE_SIZE) + self.image = Images.create(icon_path, nodeutils.ICON_SIZE) self.canvas.itemconfig(self.id, image=self.image) - def is_linkable(self, node: "CanvasNode") -> bool: - # cannot link to self - if self == node: - return False - # rj45 nodes can only support one link - if nutils.is_rj45(self.core_node) and self.edges: - return False - if nutils.is_rj45(node.core_node) and node.edges: - return False - # only 1 link between bridge based nodes - is_src_bridge = nutils.is_bridge(self.core_node) - is_dst_bridge = nutils.is_bridge(node.core_node) - common_links = self.edges & node.edges - if all([is_src_bridge, is_dst_bridge, common_links]): - return False - # valid link - return True - def hide(self) -> None: - self.hidden = True + self.canvas.addtag_withtag(tags.HIDDEN, self.id) + self.canvas.addtag_withtag(tags.HIDDEN, self.text_id) self.canvas.itemconfig(self.id, state=tk.HIDDEN) self.canvas.itemconfig(self.text_id, state=tk.HIDDEN) - for antenna in self.antennas: - self.canvas.itemconfig(antenna, state=tk.HIDDEN) - for edge in self.edges: - if not edge.hidden: - edge.hide() - for edge in self.wireless_edges: - if not edge.hidden: - edge.hide() - - def show(self) -> None: - self.hidden = False - self.canvas.itemconfig(self.id, 
state=tk.NORMAL) - state = self.app.manager.show_node_labels.state() - self.set_label(state) - for antenna in self.antennas: - self.canvas.itemconfig(antenna, state=tk.NORMAL) - for edge in self.edges: - other_node = edge.other_node(self) - if edge.hidden and not other_node.hidden: - edge.show() - for edge in self.wireless_edges: - other_node = edge.other_node(self) - if edge.hidden and not other_node.hidden: - edge.show() - - def set_label(self, state: str) -> None: - self.canvas.itemconfig(self.text_id, state=state) - - def _service_action(self, service: str, action: ServiceAction) -> None: - session_id = self.app.core.session.id - try: - result = self.app.core.client.config_service_action( - session_id, self.core_node.id, service, action - ) - if not result: - self.app.show_error("Service Action Error", "Action Failed!") - except grpc.RpcError as e: - self.app.show_grpc_exception("Service Error", e) - - def start_service(self, service: str) -> None: - self._service_action(service, ServiceAction.START) - - def stop_service(self, service: str) -> None: - self._service_action(service, ServiceAction.STOP) - - def restart_service(self, service: str) -> None: - self._service_action(service, ServiceAction.RESTART) - - def validate_service(self, service: str) -> None: - self._service_action(service, ServiceAction.VALIDATE) - - def is_wireless(self) -> bool: - return nutils.is_wireless(self.core_node) - - -class ShadowNode: - def __init__( - self, app: "Application", canvas: "CanvasGraph", node: "CanvasNode" - ) -> None: - self.app: "Application" = app - self.canvas: "CanvasGraph" = canvas - self.node: "CanvasNode" = node - self.id: Optional[int] = None - self.text_id: Optional[int] = None - self.image: PhotoImage = self.app.get_enum_icon( - ImageEnum.SHADOW, width=images.NODE_SIZE - ) - self.draw() - self.setup_bindings() - - def setup_bindings(self) -> None: - self.canvas.tag_bind(self.id, "", self.node.double_click) - self.canvas.tag_bind(self.id, "", self.node.on_enter) - self.canvas.tag_bind(self.id, "", self.node.on_leave) - self.canvas.tag_bind(self.id, "", self.node.show_context) - self.canvas.tag_bind(self.id, "", self.node.show_info) - - def draw(self) -> None: - x, y = self.node.position() - self.id: int = self.canvas.create_image( - x, y, anchor=tk.CENTER, image=self.image, tags=tags.NODE - ) - self.text_id = self.canvas.create_text( - x, - y + 20, - text=f"{self.node.get_label()} [{self.node.canvas.id}]", - tags=tags.NODE_LABEL, - font=self.app.icon_text_font, - fill="#0000CD", - state=self.app.manager.show_node_labels.state(), - justify=tk.CENTER, - ) - self.canvas.shadow_nodes[self.id] = self - self.canvas.shadow_core_nodes[self.node.core_node.id] = self - - def position(self) -> tuple[int, int]: - return self.canvas.coords(self.id) - - def should_delete(self) -> bool: - for edge in self.node.edges: - other_node = edge.other_node(self.node) - if not other_node.is_wireless() and other_node.canvas == self.canvas: - return False - return True - - def motion(self, x_offset, y_offset) -> None: - original_position = self.position() - self.canvas.move(self.id, x_offset, y_offset) - - # check new position - bbox = self.canvas.bbox(self.id) - if not self.canvas.valid_position(*bbox): - self.canvas.coords(self.id, original_position) - return - - # move text and selection box - self.canvas.move(self.text_id, x_offset, y_offset) - self.canvas.move_selection(self.id, x_offset, y_offset) - - # move edges - for edge in self.node.edges: - edge.move_shadow(self) - for edge in 
self.node.wireless_edges: - edge.move_shadow(self) - - def delete(self): - self.canvas.shadow_nodes.pop(self.id, None) - self.canvas.shadow_core_nodes.pop(self.node.core_node.id, None) - self.canvas.delete(self.id) - self.canvas.delete(self.text_id) - - def hide(self) -> None: - self.canvas.itemconfig(self.id, state=tk.HIDDEN) - self.canvas.itemconfig(self.text_id, state=tk.HIDDEN) - - def show(self) -> None: - self.canvas.itemconfig(self.id, state=tk.NORMAL) - self.canvas.itemconfig(self.text_id, state=tk.NORMAL) diff --git a/daemon/core/gui/graph/shape.py b/daemon/core/gui/graph/shape.py index 5f243fdf..36298655 100644 --- a/daemon/core/gui/graph/shape.py +++ b/daemon/core/gui/graph/shape.py @@ -1,12 +1,10 @@ import logging -from typing import TYPE_CHECKING, Any, Optional, Union +from typing import TYPE_CHECKING, Dict, List, Optional, Union from core.gui.dialogs.shapemod import ShapeDialog from core.gui.graph import tags from core.gui.graph.shapeutils import ShapeType -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application from core.gui.graph.graph import CanvasGraph @@ -71,31 +69,6 @@ class Shape: self.shape_data = data self.draw() - @classmethod - def from_metadata(cls, app: "Application", config: dict[str, Any]) -> None: - shape_type = config["type"] - try: - shape_type = ShapeType(shape_type) - coords = config["iconcoords"] - data = AnnotationData( - config["label"], - config["fontfamily"], - config["fontsize"], - config["labelcolor"], - config["color"], - config["border"], - config["width"], - config["bold"], - config["italic"], - config["underline"], - ) - canvas_id = config.get("canvas", 1) - canvas = app.manager.get(canvas_id) - shape = Shape(app, canvas, shape_type, *coords, data=data) - canvas.shapes[shape.id] = shape - except ValueError: - logger.exception("unknown shape: %s", shape_type) - def draw(self) -> None: if self.created: dash = None @@ -112,7 +85,7 @@ class Shape: fill=self.shape_data.fill_color, outline=self.shape_data.border_color, width=self.shape_data.border_width, - state=self.app.manager.show_annotations.state(), + state=self.canvas.show_annotations.state(), ) self.draw_shape_text() elif self.shape_type == ShapeType.RECTANGLE: @@ -126,7 +99,7 @@ class Shape: fill=self.shape_data.fill_color, outline=self.shape_data.border_color, width=self.shape_data.border_width, - state=self.app.manager.show_annotations.state(), + state=self.canvas.show_annotations.state(), ) self.draw_shape_text() elif self.shape_type == ShapeType.TEXT: @@ -138,13 +111,13 @@ class Shape: text=self.shape_data.text, fill=self.shape_data.text_color, font=font, - state=self.app.manager.show_annotations.state(), + state=self.canvas.show_annotations.state(), ) else: - logger.error("unknown shape type: %s", self.shape_type) + logging.error("unknown shape type: %s", self.shape_type) self.created = True - def get_font(self) -> list[Union[int, str]]: + def get_font(self) -> List[Union[int, str]]: font = [self.shape_data.font, self.shape_data.font_size] if self.shape_data.bold: font.append("bold") @@ -166,7 +139,7 @@ class Shape: text=self.shape_data.text, fill=self.shape_data.text_color, font=font, - state=self.app.manager.show_annotations.state(), + state=self.canvas.show_annotations.state(), ) def shape_motion(self, x1: float, y1: float) -> None: @@ -194,11 +167,11 @@ class Shape: self.canvas.move(self.text_id, x_offset, y_offset) def delete(self) -> None: - logger.debug("Delete shape, id(%s)", self.id) + logging.debug("Delete shape, id(%s)", self.id) 
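Note: the shape.py hunk above removes Shape.from_metadata() and drops the "canvas" key from Shape.metadata(), i.e. the newer code round-trips annotations through session metadata while the older code only writes a reduced dict. A minimal sketch of that metadata record is below; the key names come straight from the hunk, but every value shown is illustrative, not taken from the source.

    # Illustrative annotation record as written by Shape.metadata() and read
    # back by Shape.from_metadata() in the newer code; values are made up.
    annotation = {
        "canvas": 1,                      # dropped by the older code
        "type": "rectangle",              # a ShapeType value
        "iconcoords": (100.0, 80.0, 220.0, 160.0),
        "label": "DMZ hosts",
        "fontfamily": "Helvetica",
        "fontsize": 12,
        "labelcolor": "#000000",
        "color": "#CFCFFF",
        "border": "#000000",
        "width": 1,
        "bold": False,
        "italic": False,
        "underline": False,
    }
    # from_metadata() feeds these fields into AnnotationData and redraws the
    # shape on the canvas identified by "canvas" (defaulting to 1).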
self.canvas.delete(self.id) self.canvas.delete(self.text_id) - def metadata(self) -> dict[str, Union[str, int, bool]]: + def metadata(self) -> Dict[str, Union[str, int, bool]]: coords = self.canvas.coords(self.id) # update coords to actual positions if len(coords) == 4: @@ -211,7 +184,6 @@ class Shape: x1, y1 = self.canvas.get_actual_coords(x1, y1) coords = (x1, y1) return { - "canvas": self.canvas.id, "type": self.shape_type.value, "iconcoords": coords, "label": self.shape_data.text, diff --git a/daemon/core/gui/graph/shapeutils.py b/daemon/core/gui/graph/shapeutils.py index ab82ef76..2b62a46c 100644 --- a/daemon/core/gui/graph/shapeutils.py +++ b/daemon/core/gui/graph/shapeutils.py @@ -1,4 +1,5 @@ import enum +from typing import Set class ShapeType(enum.Enum): @@ -8,7 +9,7 @@ class ShapeType(enum.Enum): TEXT = "text" -SHAPES: set[ShapeType] = {ShapeType.OVAL, ShapeType.RECTANGLE} +SHAPES: Set[ShapeType] = {ShapeType.OVAL, ShapeType.RECTANGLE} def is_draw_shape(shape_type: ShapeType) -> bool: diff --git a/daemon/core/gui/graph/tags.py b/daemon/core/gui/graph/tags.py index cb1ffc15..803b969e 100644 --- a/daemon/core/gui/graph/tags.py +++ b/daemon/core/gui/graph/tags.py @@ -1,3 +1,5 @@ +from typing import List + ANNOTATION: str = "annotation" GRIDLINE: str = "gridline" SHAPE: str = "shape" @@ -13,7 +15,7 @@ WALLPAPER: str = "wallpaper" SELECTION: str = "selectednodes" MARKER: str = "marker" HIDDEN: str = "hidden" -ORGANIZE_TAGS: list[str] = [ +ORGANIZE_TAGS: List[str] = [ WALLPAPER, GRIDLINE, SHAPE, @@ -27,7 +29,7 @@ ORGANIZE_TAGS: list[str] = [ SELECTION, MARKER, ] -RESET_TAGS: list[str] = [ +RESET_TAGS: List[str] = [ EDGE, NODE, NODE_LABEL, diff --git a/daemon/core/gui/graph/tooltip.py b/daemon/core/gui/graph/tooltip.py index b820abec..6e4aa62f 100644 --- a/daemon/core/gui/graph/tooltip.py +++ b/daemon/core/gui/graph/tooltip.py @@ -1,6 +1,6 @@ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Optional, Tuple from core.gui.themes import Styles @@ -27,9 +27,9 @@ class CanvasTooltip: self, canvas: "CanvasGraph", *, - pad: tuple[int, int, int, int] = (5, 3, 5, 3), + pad: Tuple[int, int, int, int] = (5, 3, 5, 3), waittime: int = 400, - wraplength: int = 600, + wraplength: int = 600 ) -> None: # in miliseconds, originally 500 self.waittime: int = waittime @@ -37,7 +37,7 @@ class CanvasTooltip: self.wraplength: int = wraplength self.canvas: "CanvasGraph" = canvas self.text: tk.StringVar = tk.StringVar() - self.pad: tuple[int, int, int, int] = pad + self.pad: Tuple[int, int, int, int] = pad self.id: Optional[str] = None self.tw: Optional[tk.Toplevel] = None @@ -63,8 +63,8 @@ class CanvasTooltip: canvas: "CanvasGraph", label: ttk.Label, *, - tip_delta: tuple[int, int] = (10, 5), - pad: tuple[int, int, int, int] = (5, 3, 5, 3), + tip_delta: Tuple[int, int] = (10, 5), + pad: Tuple[int, int, int, int] = (5, 3, 5, 3) ): c = canvas s_width, s_height = c.winfo_screenwidth(), c.winfo_screenheight() @@ -112,7 +112,7 @@ class CanvasTooltip: ) label.grid(padx=(pad[0], pad[2]), pady=(pad[1], pad[3]), sticky=tk.NSEW) x, y = tip_pos_calculator(canvas, label, pad=pad) - self.tw.wm_geometry(f"+{x:d}+{y:d}") + self.tw.wm_geometry("+%d+%d" % (x, y)) def hide(self) -> None: if self.tw: diff --git a/daemon/core/gui/images.py b/daemon/core/gui/images.py index 070137fb..66d92d30 100644 --- a/daemon/core/gui/images.py +++ b/daemon/core/gui/images.py @@ -1,46 +1,53 @@ from enum import Enum -from typing import Optional +from tkinter import 
messagebox +from typing import Dict, Optional, Tuple from PIL import Image from PIL.ImageTk import PhotoImage -from core.api.grpc.wrappers import Node, NodeType +from core.api.grpc.wrappers import NodeType from core.gui.appconfig import LOCAL_ICONS_PATH -NODE_SIZE: int = 48 -ANTENNA_SIZE: int = 32 -BUTTON_SIZE: int = 16 -ERROR_SIZE: int = 24 -DIALOG_SIZE: int = 16 -IMAGES: dict[str, str] = {} +class Images: + images: Dict[str, str] = {} -def load_all() -> None: - for image in LOCAL_ICONS_PATH.glob("*"): + @classmethod + def create(cls, file_path: str, width: int, height: int = None) -> PhotoImage: + if height is None: + height = width + image = Image.open(file_path) + image = image.resize((width, height), Image.ANTIALIAS) + return PhotoImage(image) + + @classmethod + def load_all(cls) -> None: + for image in LOCAL_ICONS_PATH.glob("*"): + cls.images[image.stem] = str(image) + + @classmethod + def get(cls, image_enum: Enum, width: int, height: int = None) -> PhotoImage: + file_path = cls.images[image_enum.value] + return cls.create(file_path, width, height) + + @classmethod + def get_with_image_file( + cls, stem: str, width: int, height: int = None + ) -> PhotoImage: + file_path = cls.images[stem] + return cls.create(file_path, width, height) + + @classmethod + def get_custom(cls, name: str, width: int, height: int = None) -> PhotoImage: try: - ImageEnum(image.stem) - IMAGES[image.stem] = str(image) - except ValueError: - pass - - -def from_file( - file_path: str, *, width: int, height: int = None, scale: float = 1.0 -) -> PhotoImage: - if height is None: - height = width - width = int(width * scale) - height = int(height * scale) - image = Image.open(file_path) - image = image.resize((width, height), Image.ANTIALIAS) - return PhotoImage(image) - - -def from_enum( - image_enum: "ImageEnum", *, width: int, height: int = None, scale: float = 1.0 -) -> PhotoImage: - file_path = IMAGES[image_enum.value] - return from_file(file_path, width=width, height=height, scale=scale) + file_path = cls.images[name] + return cls.create(file_path, width, height) + except KeyError: + messagebox.showwarning( + "Missing image file", + f"{name}.png is missing at daemon/core/gui/data/icons, drop image " + f"file at daemon/core/gui/data/icons and restart the gui", + ) class ImageEnum(Enum): @@ -53,7 +60,6 @@ class ImageEnum(Enum): LINK = "link" HUB = "hub" WLAN = "wlan" - WIRELESS = "wireless" EMANE = "emane" RJ45 = "rj45" TUNNEL = "tunnel" @@ -78,38 +84,31 @@ class ImageEnum(Enum): EDITDELETE = "edit-delete" ANTENNA = "antenna" DOCKER = "docker" - PODMAN = "podman" LXC = "lxc" ALERT = "alert" DELETE = "delete" SHUTDOWN = "shutdown" CANCEL = "cancel" ERROR = "error" - SHADOW = "shadow" -TYPE_MAP: dict[tuple[NodeType, str], ImageEnum] = { - (NodeType.DEFAULT, "router"): ImageEnum.ROUTER, - (NodeType.DEFAULT, "PC"): ImageEnum.PC, - (NodeType.DEFAULT, "host"): ImageEnum.HOST, - (NodeType.DEFAULT, "mdr"): ImageEnum.MDR, - (NodeType.DEFAULT, "prouter"): ImageEnum.PROUTER, - (NodeType.HUB, None): ImageEnum.HUB, - (NodeType.SWITCH, None): ImageEnum.SWITCH, - (NodeType.WIRELESS_LAN, None): ImageEnum.WLAN, - (NodeType.WIRELESS, None): ImageEnum.WIRELESS, - (NodeType.EMANE, None): ImageEnum.EMANE, - (NodeType.RJ45, None): ImageEnum.RJ45, - (NodeType.TUNNEL, None): ImageEnum.TUNNEL, - (NodeType.DOCKER, None): ImageEnum.DOCKER, - (NodeType.PODMAN, None): ImageEnum.PODMAN, - (NodeType.LXC, None): ImageEnum.LXC, -} +class TypeToImage: + type_to_image: Dict[Tuple[NodeType, str], ImageEnum] = { + (NodeType.DEFAULT, "router"): 
ImageEnum.ROUTER, + (NodeType.DEFAULT, "PC"): ImageEnum.PC, + (NodeType.DEFAULT, "host"): ImageEnum.HOST, + (NodeType.DEFAULT, "mdr"): ImageEnum.MDR, + (NodeType.DEFAULT, "prouter"): ImageEnum.PROUTER, + (NodeType.HUB, ""): ImageEnum.HUB, + (NodeType.SWITCH, ""): ImageEnum.SWITCH, + (NodeType.WIRELESS_LAN, ""): ImageEnum.WLAN, + (NodeType.EMANE, ""): ImageEnum.EMANE, + (NodeType.RJ45, ""): ImageEnum.RJ45, + (NodeType.TUNNEL, ""): ImageEnum.TUNNEL, + (NodeType.DOCKER, ""): ImageEnum.DOCKER, + (NodeType.LXC, ""): ImageEnum.LXC, + } - -def from_node(node: Node, *, scale: float) -> Optional[PhotoImage]: - image = None - image_enum = TYPE_MAP.get((node.type, node.model)) - if image_enum: - image = from_enum(image_enum, width=NODE_SIZE, scale=scale) - return image + @classmethod + def get(cls, node_type, model) -> Optional[ImageEnum]: + return cls.type_to_image.get((node_type, model)) diff --git a/daemon/core/gui/interface.py b/daemon/core/gui/interface.py index 9ebea3c1..4c5f5978 100644 --- a/daemon/core/gui/interface.py +++ b/daemon/core/gui/interface.py @@ -1,24 +1,16 @@ import logging -from typing import TYPE_CHECKING, Any, Optional +from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set, Tuple import netaddr from netaddr import EUI, IPNetwork -from core.api.grpc.wrappers import Interface, Link, LinkType, Node -from core.gui import nodeutils as nutils -from core.gui.graph.edges import CanvasEdge +from core.api.grpc.wrappers import Interface, Link, Node from core.gui.graph.node import CanvasNode - -logger = logging.getLogger(__name__) +from core.gui.nodeutils import NodeUtils if TYPE_CHECKING: from core.gui.app import Application -IP4_MASK: int = 24 -IP6_MASK: int = 64 -WIRELESS_IP4_MASK: int = 32 -WIRELESS_IP6_MASK: int = 128 - def get_index(iface: Interface) -> Optional[int]: if not iface.ip4: @@ -43,7 +35,7 @@ class Subnets: def __hash__(self) -> int: return hash(self.key()) - def key(self) -> tuple[IPNetwork, IPNetwork]: + def key(self) -> Tuple[IPNetwork, IPNetwork]: return self.ip4, self.ip6 def next(self) -> "Subnets": @@ -55,24 +47,25 @@ class InterfaceManager: self.app: "Application" = app ip4 = self.app.guiconfig.ips.ip4 ip6 = self.app.guiconfig.ips.ip6 - self.ip4_subnets: IPNetwork = IPNetwork(f"{ip4}/{IP4_MASK}") - self.ip6_subnets: IPNetwork = IPNetwork(f"{ip6}/{IP6_MASK}") + self.ip4_mask: int = 24 + self.ip6_mask: int = 64 + self.ip4_subnets: IPNetwork = IPNetwork(f"{ip4}/{self.ip4_mask}") + self.ip6_subnets: IPNetwork = IPNetwork(f"{ip6}/{self.ip6_mask}") mac = self.app.guiconfig.mac self.mac: EUI = EUI(mac, dialect=netaddr.mac_unix_expanded) self.current_mac: Optional[EUI] = None self.current_subnets: Optional[Subnets] = None - self.used_subnets: dict[tuple[IPNetwork, IPNetwork], Subnets] = {} - self.used_macs: set[str] = set() + self.used_subnets: Dict[Tuple[IPNetwork, IPNetwork], Subnets] = {} def update_ips(self, ip4: str, ip6: str) -> None: self.reset() - self.ip4_subnets = IPNetwork(f"{ip4}/{IP4_MASK}") - self.ip6_subnets = IPNetwork(f"{ip6}/{IP6_MASK}") + self.ip4_subnets = IPNetwork(f"{ip4}/{self.ip4_mask}") + self.ip6_subnets = IPNetwork(f"{ip6}/{self.ip6_mask}") + + def reset_mac(self) -> None: + self.current_mac = self.mac def next_mac(self) -> str: - while str(self.current_mac) in self.used_macs: - value = self.current_mac.value + 1 - self.current_mac = EUI(value, dialect=netaddr.mac_unix_expanded) mac = str(self.current_mac) value = self.current_mac.value + 1 self.current_mac = EUI(value, dialect=netaddr.mac_unix_expanded) @@ -91,7 +84,7 @@ class 
InterfaceManager: self.current_subnets = None self.used_subnets.clear() - def removed(self, links: list[Link]) -> None: + def removed(self, links: List[Link]) -> None: # get remaining subnets remaining_subnets = set() for edge in self.app.core.links.values(): @@ -121,16 +114,7 @@ class InterfaceManager: subnets.used_indexes.discard(index) self.current_subnets = None - def set_macs(self, links: list[Link]) -> None: - self.current_mac = self.mac - self.used_macs.clear() - for link in links: - if link.iface1: - self.used_macs.add(link.iface1.mac) - if link.iface2: - self.used_macs.add(link.iface2.mac) - - def joined(self, links: list[Link]) -> None: + def joined(self, links: List[Link]) -> None: ifaces = [] for link in links: if link.iface1: @@ -149,7 +133,7 @@ class InterfaceManager: self.used_subnets[subnets.key()] = subnets def next_index(self, node: Node) -> int: - if nutils.is_router(node): + if NodeUtils.is_router_node(node): index = 1 else: index = 20 @@ -160,26 +144,19 @@ class InterfaceManager: index += 1 return index - def get_ips(self, node: Node) -> [Optional[str], Optional[str]]: - enable_ip4 = self.app.guiconfig.ips.enable_ip4 - enable_ip6 = self.app.guiconfig.ips.enable_ip6 - ip4, ip6 = None, None - if not enable_ip4 and not enable_ip6: - return ip4, ip6 + def get_ips(self, node: Node) -> [str, str]: index = self.next_index(node) - if enable_ip4: - ip4 = str(self.current_subnets.ip4[index]) - if enable_ip6: - ip6 = str(self.current_subnets.ip6[index]) - return ip4, ip6 + ip4 = self.current_subnets.ip4[index] + ip6 = self.current_subnets.ip6[index] + return str(ip4), str(ip6) def get_subnets(self, iface: Interface) -> Subnets: ip4_subnet = self.ip4_subnets if iface.ip4: - ip4_subnet = IPNetwork(f"{iface.ip4}/{IP4_MASK}").cidr + ip4_subnet = IPNetwork(f"{iface.ip4}/{iface.ip4_mask}").cidr ip6_subnet = self.ip6_subnets if iface.ip6: - ip6_subnet = IPNetwork(f"{iface.ip6}/{IP6_MASK}").cidr + ip6_subnet = IPNetwork(f"{iface.ip6}/{iface.ip6_mask}").cidr subnets = Subnets(ip4_subnet, ip6_subnet) return self.used_subnets.get(subnets.key(), subnets) @@ -188,8 +165,8 @@ class InterfaceManager: ) -> None: src_node = canvas_src_node.core_node dst_node = canvas_dst_node.core_node - is_src_container = nutils.is_container(src_node) - is_dst_container = nutils.is_container(dst_node) + is_src_container = NodeUtils.is_container_node(src_node.type) + is_dst_container = NodeUtils.is_container_node(dst_node.type) if is_src_container and is_dst_container: self.current_subnets = self.next_subnets() elif is_src_container and not is_dst_container: @@ -205,22 +182,25 @@ class InterfaceManager: else: self.current_subnets = self.next_subnets() else: - logger.info("ignoring subnet change for link between network nodes") + logging.info("ignoring subnet change for link between network nodes") def find_subnets( - self, canvas_node: CanvasNode, visited: set[int] = None + self, canvas_node: CanvasNode, visited: Set[int] = None ) -> Optional[IPNetwork]: - logger.info("finding subnet for node: %s", canvas_node.core_node.name) + logging.info("finding subnet for node: %s", canvas_node.core_node.name) + canvas = self.app.canvas subnets = None if not visited: visited = set() visited.add(canvas_node.core_node.id) for edge in canvas_node.edges: + src_node = canvas.nodes[edge.src] + dst_node = canvas.nodes[edge.dst] iface = edge.link.iface1 - check_node = edge.src - if edge.src == canvas_node: + check_node = src_node + if src_node == canvas_node: iface = edge.link.iface2 - check_node = edge.dst + check_node = dst_node 
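Note: the InterfaceManager hunk above is where automatic addressing lives: next_index() starts routers at interface index 1 and other containers at 20, get_ips() indexes the current ip4/ip6 subnets with that value, and next_mac() hands out MACs by incrementing an EUI (the newer code also records used MACs from existing links so joined sessions do not collide). A minimal standalone sketch of those netaddr operations, using example base values rather than the real guiconfig defaults:

    # Sketch of the address math used by InterfaceManager; the base subnets
    # and starting MAC here are examples, real values come from guiconfig.
    import netaddr
    from netaddr import EUI, IPNetwork

    ip4_subnet = IPNetwork("10.0.0.0/24")
    ip6_subnet = IPNetwork("2001::/64")

    router_index, host_index = 1, 20      # next_index() starting points
    print(ip4_subnet[router_index])       # 10.0.0.1
    print(ip6_subnet[host_index])         # 2001::14

    mac = EUI("00:00:00:aa:00:00", dialect=netaddr.mac_unix_expanded)
    mac = EUI(mac.value + 1, dialect=netaddr.mac_unix_expanded)
    print(mac)                            # 00:00:00:aa:00:01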
if check_node.core_node.id in visited: continue visited.add(check_node.core_node.id) @@ -229,55 +209,6 @@ class InterfaceManager: else: subnets = self.find_subnets(check_node, visited) if subnets: - logger.info("found subnets: %s", subnets) + logging.info("found subnets: %s", subnets) break return subnets - - def create_link(self, edge: CanvasEdge) -> Link: - """ - Create core link for a given edge based on src/dst nodes. - """ - src_node = edge.src.core_node - dst_node = edge.dst.core_node - self.determine_subnets(edge.src, edge.dst) - src_iface = None - if nutils.is_iface_node(src_node): - src_iface = self.create_iface(edge.src, edge.linked_wireless) - dst_iface = None - if nutils.is_iface_node(dst_node): - dst_iface = self.create_iface(edge.dst, edge.linked_wireless) - link = Link( - type=LinkType.WIRED, - node1_id=src_node.id, - node2_id=dst_node.id, - iface1=src_iface, - iface2=dst_iface, - ) - logger.info("added link between %s and %s", src_node.name, dst_node.name) - return link - - def create_iface(self, canvas_node: CanvasNode, wireless_link: bool) -> Interface: - node = canvas_node.core_node - if nutils.is_bridge(node): - iface_id = canvas_node.next_iface_id() - iface = Interface(id=iface_id) - else: - ip4, ip6 = self.get_ips(node) - if wireless_link: - ip4_mask = WIRELESS_IP4_MASK - ip6_mask = WIRELESS_IP6_MASK - else: - ip4_mask = IP4_MASK - ip6_mask = IP6_MASK - iface_id = canvas_node.next_iface_id() - name = f"eth{iface_id}" - iface = Interface( - id=iface_id, - name=name, - ip4=ip4, - ip4_mask=ip4_mask, - ip6=ip6, - ip6_mask=ip6_mask, - ) - logger.info("create node(%s) interface(%s)", node.name, iface) - return iface diff --git a/daemon/core/gui/menubar.py b/daemon/core/gui/menubar.py index 16e57cb6..99a4e936 100644 --- a/daemon/core/gui/menubar.py +++ b/daemon/core/gui/menubar.py @@ -1,12 +1,11 @@ import logging +import os import tkinter as tk import webbrowser from functools import partial -from pathlib import Path from tkinter import filedialog, messagebox from typing import TYPE_CHECKING, Optional -from core.gui import images from core.gui.coreclient import CoreClient from core.gui.dialogs.about import AboutDialog from core.gui.dialogs.canvassizeandscale import SizeAndScaleDialog @@ -23,12 +22,11 @@ from core.gui.dialogs.servers import ServersDialog from core.gui.dialogs.sessionoptions import SessionOptionsDialog from core.gui.dialogs.sessions import SessionsDialog from core.gui.dialogs.throughput import ThroughputDialog -from core.gui.graph.manager import CanvasManager +from core.gui.graph.graph import CanvasGraph +from core.gui.nodeutils import ICON_SIZE from core.gui.observers import ObserversMenu from core.gui.task import ProgressTask -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -47,10 +45,9 @@ class Menubar(tk.Menu): super().__init__(app) self.app: "Application" = app self.core: CoreClient = app.core - self.manager: CanvasManager = app.manager + self.canvas: CanvasGraph = app.canvas self.recent_menu: Optional[tk.Menu] = None self.edit_menu: Optional[tk.Menu] = None - self.canvas_menu: Optional[tk.Menu] = None self.observers_menu: Optional[ObserversMenu] = None self.draw() @@ -78,7 +75,7 @@ class Menubar(tk.Menu): self.app.bind_all("", lambda e: self.click_new()) menu.add_command(label="Save", accelerator="Ctrl+S", command=self.click_save) self.app.bind_all("", self.click_save) - menu.add_command(label="Save As...", command=self.click_save_as) + menu.add_command(label="Save As...", command=self.click_save_xml) 
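Note: the create_link()/create_iface() helpers removed just above encode the wiring rule the newer GUI applies when an edge is completed: hub/switch (bridge) endpoints get a bare, unaddressed interface, while container endpoints get an eth<N> interface with /24 and /64 masks, tightened to /32 and /128 when the edge is wireless-linked. A rough sketch of that rule follows; sketch_iface is a hypothetical standalone helper, and importing the Interface wrapper assumes the core package is on the path.

    from core.api.grpc.wrappers import Interface

    def sketch_iface(iface_id: int, is_bridge: bool, wireless: bool,
                     ip4: str, ip6: str) -> Interface:
        # bridge-based nodes (hub/switch) only need an interface id
        if is_bridge:
            return Interface(id=iface_id)
        # container nodes get addressed eth<N> interfaces; wireless-linked
        # edges use the /32 and /128 masks defined in the hunk above
        ip4_mask, ip6_mask = (32, 128) if wireless else (24, 64)
        return Interface(
            id=iface_id,
            name=f"eth{iface_id}",
            ip4=ip4, ip4_mask=ip4_mask,
            ip6=ip6, ip6_mask=ip6_mask,
        )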
menu.add_command( label="Open...", command=self.click_open_xml, accelerator="Ctrl+O" ) @@ -86,7 +83,7 @@ class Menubar(tk.Menu): self.recent_menu = tk.Menu(menu) for i in self.app.guiconfig.recentfiles: self.recent_menu.add_command( - label=i, command=partial(self.open_recent_files, Path(i)) + label=i, command=partial(self.open_recent_files, i) ) menu.add_cascade(label="Recent Files", menu=self.recent_menu) menu.add_separator() @@ -109,7 +106,6 @@ class Menubar(tk.Menu): menu = tk.Menu(self) menu.add_command(label="Preferences", command=self.click_preferences) menu.add_command(label="Custom Nodes", command=self.click_custom_nodes) - menu.add_command(label="Show Hidden Nodes", command=self.click_show_hidden) menu.add_separator() menu.add_command(label="Undo", accelerator="Ctrl+Z", state=tk.DISABLED) menu.add_command(label="Redo", accelerator="Ctrl+Y", state=tk.DISABLED) @@ -122,6 +118,11 @@ class Menubar(tk.Menu): ) menu.add_command(label="Hide", accelerator="Ctrl+H", command=self.click_hide) self.add_cascade(label="Edit", menu=menu) + self.app.master.bind_all("", self.click_cut) + self.app.master.bind_all("", self.click_copy) + self.app.master.bind_all("", self.click_paste) + self.app.master.bind_all("", self.click_delete) + self.app.master.bind_all("", self.click_hide) self.edit_menu = menu def draw_canvas_menu(self) -> None: @@ -129,13 +130,9 @@ class Menubar(tk.Menu): Create canvas menu """ menu = tk.Menu(self) - menu.add_command(label="New", command=self.click_canvas_add) menu.add_command(label="Size / Scale", command=self.click_canvas_size_and_scale) - menu.add_separator() - menu.add_command(label="Delete", command=self.click_canvas_delete) menu.add_command(label="Wallpaper", command=self.click_canvas_wallpaper) self.add_cascade(label="Canvas", menu=menu) - self.canvas_menu = menu def draw_view_menu(self) -> None: """ @@ -150,52 +147,52 @@ class Menubar(tk.Menu): menu.add_checkbutton( label="Interface Names", command=self.click_edge_label_change, - variable=self.manager.show_iface_names, + variable=self.canvas.show_iface_names, ) menu.add_checkbutton( label="IPv4 Addresses", command=self.click_edge_label_change, - variable=self.manager.show_ip4s, + variable=self.canvas.show_ip4s, ) menu.add_checkbutton( label="IPv6 Addresses", command=self.click_edge_label_change, - variable=self.manager.show_ip6s, + variable=self.canvas.show_ip6s, ) menu.add_checkbutton( label="Node Labels", - command=self.manager.show_node_labels.click_handler, - variable=self.manager.show_node_labels, + command=self.canvas.show_node_labels.click_handler, + variable=self.canvas.show_node_labels, ) menu.add_checkbutton( label="Link Labels", - command=self.manager.show_link_labels.click_handler, - variable=self.manager.show_link_labels, + command=self.canvas.show_link_labels.click_handler, + variable=self.canvas.show_link_labels, ) menu.add_checkbutton( label="Links", - command=self.manager.show_links.click_handler, - variable=self.manager.show_links, + command=self.canvas.show_links.click_handler, + variable=self.canvas.show_links, ) menu.add_checkbutton( label="Loss Links", - command=self.manager.show_loss_links.click_handler, - variable=self.manager.show_loss_links, + command=self.canvas.show_loss_links.click_handler, + variable=self.canvas.show_loss_links, ) menu.add_checkbutton( label="Wireless Links", - command=self.manager.show_wireless.click_handler, - variable=self.manager.show_wireless, + command=self.canvas.show_wireless.click_handler, + variable=self.canvas.show_wireless, ) menu.add_checkbutton( 
label="Annotations", - command=self.manager.show_annotations.click_handler, - variable=self.manager.show_annotations, + command=self.canvas.show_annotations.click_handler, + variable=self.canvas.show_annotations, ) menu.add_checkbutton( label="Canvas Grid", - command=self.manager.show_grid.click_handler, - variable=self.manager.show_grid, + command=self.canvas.show_grid.click_handler, + variable=self.canvas.show_grid, ) self.add_cascade(label="View", menu=menu) @@ -235,11 +232,7 @@ class Menubar(tk.Menu): menu.add_command( label="Configure Throughput", command=self.click_config_throughput ) - menu.add_checkbutton( - label="Enable Throughput?", - command=self.click_throughput, - variable=self.core.show_throughputs, - ) + menu.add_checkbutton(label="Enable Throughput?", command=self.click_throughput) widget_menu.add_cascade(label="Throughput", menu=menu) def draw_widgets_menu(self) -> None: @@ -273,28 +266,27 @@ class Menubar(tk.Menu): menu.add_command(label="About", command=self.click_about) self.add_cascade(label="Help", menu=menu) - def open_recent_files(self, file_path: Path) -> None: - if file_path.is_file(): - logger.debug("Open recent file %s", file_path) - self.open_xml_task(file_path) + def open_recent_files(self, filename: str) -> None: + if os.path.isfile(filename): + logging.debug("Open recent file %s", filename) + self.open_xml_task(filename) else: - logger.warning("File does not exist %s", file_path) + logging.warning("File does not exist %s", filename) def update_recent_files(self) -> None: self.recent_menu.delete(0, tk.END) for i in self.app.guiconfig.recentfiles: self.recent_menu.add_command( - label=i, command=partial(self.open_recent_files, Path(i)) + label=i, command=partial(self.open_recent_files, i) ) - def click_save(self, _event: tk.Event = None) -> None: + def click_save(self, _event=None) -> None: if self.core.session.file: - if self.core.save_xml(): - self.add_recent_file_to_gui_config(self.core.session.file) + self.core.save_xml() else: - self.click_save_as() + self.click_save_xml() - def click_save_as(self, _event: tk.Event = None) -> None: + def click_save_xml(self, _event: tk.Event = None) -> None: init_dir = self.core.get_xml_dir() file_path = filedialog.asksaveasfilename( initialdir=init_dir, @@ -303,9 +295,8 @@ class Menubar(tk.Menu): defaultextension=".xml", ) if file_path: - file_path = Path(file_path) - if self.core.save_xml(file_path): - self.add_recent_file_to_gui_config(file_path) + self.add_recent_file_to_gui_config(file_path) + self.core.save_xml(file_path) def click_open_xml(self, _event: tk.Event = None) -> None: init_dir = self.core.get_xml_dir() @@ -315,10 +306,9 @@ class Menubar(tk.Menu): filetypes=(("XML Files", "*.xml"), ("All Files", "*")), ) if file_path: - file_path = Path(file_path) self.open_xml_task(file_path) - def open_xml_task(self, file_path: Path) -> None: + def open_xml_task(self, file_path: str) -> None: self.add_recent_file_to_gui_config(file_path) self.prompt_save_running_session() task = ProgressTask(self.app, "Open XML", self.core.open_xml, args=(file_path,)) @@ -328,23 +318,35 @@ class Menubar(tk.Menu): dialog = ExecutePythonDialog(self.app) dialog.show() - def add_recent_file_to_gui_config(self, file_path: Path) -> None: + def add_recent_file_to_gui_config(self, file_path) -> None: recent_files = self.app.guiconfig.recentfiles - file_path = str(file_path) - if file_path in recent_files: - recent_files.remove(file_path) - recent_files.insert(0, file_path) - if len(recent_files) > MAX_FILES: - recent_files.pop() + num_files = 
len(recent_files) + if num_files == 0: + recent_files.insert(0, file_path) + elif 0 < num_files <= MAX_FILES: + if file_path in recent_files: + recent_files.remove(file_path) + recent_files.insert(0, file_path) + else: + if num_files == MAX_FILES: + recent_files.pop() + recent_files.insert(0, file_path) + else: + logging.error("unexpected number of recent files") self.app.save_config() self.app.menubar.update_recent_files() - def set_state(self, is_runtime: bool) -> None: - state = tk.DISABLED if is_runtime else tk.NORMAL - for entry in {"Copy", "Paste", "Delete", "Cut"}: - self.edit_menu.entryconfigure(entry, state=state) - for entry in {"Delete"}: - self.canvas_menu.entryconfigure(entry, state=state) + def change_menubar_item_state(self, is_runtime: bool) -> None: + labels = {"Copy", "Paste", "Delete", "Cut"} + for i in range(self.edit_menu.index(tk.END) + 1): + try: + label = self.edit_menu.entrycget(i, "label") + if label not in labels: + continue + state = tk.DISABLED if is_runtime else tk.NORMAL + self.edit_menu.entryconfig(i, state=state) + except tk.TclError: + pass def prompt_save_running_session(self, quit_app: bool = False) -> None: """ @@ -372,12 +374,6 @@ class Menubar(tk.Menu): dialog = PreferencesDialog(self.app) dialog.show() - def click_canvas_add(self) -> None: - self.manager.add_canvas() - - def click_canvas_delete(self) -> None: - self.manager.delete_canvas() - def click_canvas_size_and_scale(self) -> None: dialog = SizeAndScaleDialog(self.app) dialog.show() @@ -397,7 +393,7 @@ class Menubar(tk.Menu): dialog.show() def click_throughput(self) -> None: - if self.core.show_throughputs.get(): + if not self.core.handling_throughputs: self.core.enable_throughputs() else: self.core.cancel_throughputs() @@ -407,48 +403,39 @@ class Menubar(tk.Menu): dialog.show() def click_copy(self, _event: tk.Event = None) -> None: - canvas = self.manager.current() - canvas.copy_selected() + self.canvas.copy() - def click_paste(self, event: tk.Event = None) -> None: - canvas = self.manager.current() - canvas.paste_selected(event) + def click_paste(self, _event: tk.Event = None) -> None: + self.canvas.paste() - def click_delete(self, event: tk.Event = None) -> None: - canvas = self.manager.current() - canvas.delete_selected(event) + def click_delete(self, _event: tk.Event = None) -> None: + self.canvas.delete_selected_objects() - def click_hide(self, event: tk.Event = None) -> None: - canvas = self.manager.current() - canvas.hide_selected(event) + def click_cut(self, _event: tk.Event = None) -> None: + self.canvas.copy() + self.canvas.delete_selected_objects() - def click_cut(self, event: tk.Event = None) -> None: - canvas = self.manager.current() - canvas.copy_selected(event) - canvas.delete_selected(event) - - def click_show_hidden(self, _event: tk.Event = None) -> None: - for canvas in self.manager.all(): - canvas.show_hidden() + def click_hide(self, _event: tk.Event = None) -> None: + self.canvas.hide_selected_objects() def click_session_options(self) -> None: - logger.debug("Click options") + logging.debug("Click options") dialog = SessionOptionsDialog(self.app) if not dialog.has_error: dialog.show() def click_sessions(self) -> None: - logger.debug("Click change sessions") + logging.debug("Click change sessions") dialog = SessionsDialog(self.app) dialog.show() def click_hooks(self) -> None: - logger.debug("Click hooks") + logging.debug("Click hooks") dialog = HooksDialog(self.app) dialog.show() def click_servers(self) -> None: - logger.debug("Click emulation servers") + 
logging.debug("Click emulation servers") dialog = ServersDialog(self.app) dialog.show() @@ -457,15 +444,14 @@ class Menubar(tk.Menu): dialog.show() def click_autogrid(self) -> None: - width, height = self.manager.current().current_dimensions - padding = (images.NODE_SIZE / 2) + 10 - layout_size = padding + images.NODE_SIZE + width, height = self.canvas.current_dimensions + padding = (ICON_SIZE / 2) + 10 + layout_size = padding + ICON_SIZE col_count = width // layout_size - logger.info( + logging.info( "auto grid layout: dimension(%s, %s) col(%s)", width, height, col_count ) - canvas = self.manager.current() - for i, node in enumerate(canvas.nodes.values()): + for i, node in enumerate(self.canvas.nodes.values()): col = i % col_count row = i // col_count x = (col * layout_size) + padding @@ -479,7 +465,7 @@ class Menubar(tk.Menu): self.app.hide_info() def click_edge_label_change(self) -> None: - for edge in self.manager.edges.values(): + for edge in self.canvas.edges.values(): edge.draw_labels() def click_mac_config(self) -> None: diff --git a/daemon/core/gui/nodeutils.py b/daemon/core/gui/nodeutils.py index 0b3e3d9a..5ee3469e 100644 --- a/daemon/core/gui/nodeutils.py +++ b/daemon/core/gui/nodeutils.py @@ -1,155 +1,14 @@ import logging -from typing import TYPE_CHECKING, Optional +from typing import List, Optional, Set from PIL.ImageTk import PhotoImage from core.api.grpc.wrappers import Node, NodeType -from core.gui import images from core.gui.appconfig import CustomNode, GuiConfig -from core.gui.images import ImageEnum +from core.gui.images import ImageEnum, Images, TypeToImage -logger = logging.getLogger(__name__) - -if TYPE_CHECKING: - from core.gui.app import Application - -NODES: list["NodeDraw"] = [] -NETWORK_NODES: list["NodeDraw"] = [] -NODE_ICONS = {} -CONTAINER_NODES: set[NodeType] = { - NodeType.DEFAULT, - NodeType.DOCKER, - NodeType.LXC, - NodeType.PODMAN, -} -IMAGE_NODES: set[NodeType] = {NodeType.DOCKER, NodeType.LXC, NodeType.PODMAN} -WIRELESS_NODES: set[NodeType] = { - NodeType.WIRELESS_LAN, - NodeType.EMANE, - NodeType.WIRELESS, -} -RJ45_NODES: set[NodeType] = {NodeType.RJ45} -BRIDGE_NODES: set[NodeType] = {NodeType.HUB, NodeType.SWITCH} -IGNORE_NODES: set[NodeType] = {NodeType.CONTROL_NET} -MOBILITY_NODES: set[NodeType] = {NodeType.WIRELESS_LAN, NodeType.EMANE} -NODE_MODELS: set[str] = {"router", "PC", "mdr", "prouter"} -ROUTER_NODES: set[str] = {"router", "mdr"} -ANTENNA_ICON: Optional[PhotoImage] = None - - -def setup() -> None: - global ANTENNA_ICON - nodes = [ - (ImageEnum.PC, NodeType.DEFAULT, "PC", "PC"), - (ImageEnum.MDR, NodeType.DEFAULT, "MDR", "mdr"), - (ImageEnum.ROUTER, NodeType.DEFAULT, "Router", "router"), - (ImageEnum.PROUTER, NodeType.DEFAULT, "PRouter", "prouter"), - (ImageEnum.DOCKER, NodeType.DOCKER, "Docker", None), - (ImageEnum.LXC, NodeType.LXC, "LXC", None), - (ImageEnum.PODMAN, NodeType.PODMAN, "Podman", None), - ] - for image_enum, node_type, label, model in nodes: - node_draw = NodeDraw.from_setup(image_enum, node_type, label, model) - NODES.append(node_draw) - NODE_ICONS[(node_type, model)] = node_draw.image - network_nodes = [ - (ImageEnum.HUB, NodeType.HUB, "Hub"), - (ImageEnum.SWITCH, NodeType.SWITCH, "Switch"), - (ImageEnum.WLAN, NodeType.WIRELESS_LAN, "WLAN"), - (ImageEnum.WIRELESS, NodeType.WIRELESS, "Wireless"), - (ImageEnum.EMANE, NodeType.EMANE, "EMANE"), - (ImageEnum.RJ45, NodeType.RJ45, "RJ45"), - (ImageEnum.TUNNEL, NodeType.TUNNEL, "Tunnel"), - ] - for image_enum, node_type, label in network_nodes: - node_draw = 
NodeDraw.from_setup(image_enum, node_type, label) - NETWORK_NODES.append(node_draw) - NODE_ICONS[(node_type, None)] = node_draw.image - ANTENNA_ICON = images.from_enum(ImageEnum.ANTENNA, width=images.ANTENNA_SIZE) - - -def is_bridge(node: Node) -> bool: - return node.type in BRIDGE_NODES - - -def is_mobility(node: Node) -> bool: - return node.type in MOBILITY_NODES - - -def is_router(node: Node) -> bool: - return is_model(node) and node.model in ROUTER_NODES - - -def should_ignore(node: Node) -> bool: - return node.type in IGNORE_NODES - - -def is_container(node: Node) -> bool: - return node.type in CONTAINER_NODES - - -def is_model(node: Node) -> bool: - return node.type == NodeType.DEFAULT - - -def has_image(node_type: NodeType) -> bool: - return node_type in IMAGE_NODES - - -def is_wireless(node: Node) -> bool: - return node.type in WIRELESS_NODES - - -def is_rj45(node: Node) -> bool: - return node.type in RJ45_NODES - - -def is_custom(node: Node) -> bool: - return is_model(node) and node.model not in NODE_MODELS - - -def is_iface_node(node: Node) -> bool: - return is_container(node) or is_bridge(node) - - -def get_custom_services(gui_config: GuiConfig, name: str) -> list[str]: - for custom_node in gui_config.nodes: - if custom_node.name == name: - return custom_node.services - return [] - - -def _get_custom_file(config: GuiConfig, name: str) -> Optional[str]: - for custom_node in config.nodes: - if custom_node.name == name: - return custom_node.image - return None - - -def get_icon(node: Node, app: "Application") -> PhotoImage: - scale = app.app_scale - image = None - # node icon was overridden with a specific value - if node.icon: - try: - image = images.from_file(node.icon, width=images.NODE_SIZE, scale=scale) - except OSError: - logger.error("invalid icon: %s", node.icon) - # custom node - elif is_custom(node): - image_file = _get_custom_file(app.guiconfig, node.model) - logger.info("custom node file: %s", image_file) - if image_file: - image = images.from_file(image_file, width=images.NODE_SIZE, scale=scale) - # built in node - else: - image = images.from_node(node, scale=scale) - # default image, if everything above fails - if not image: - image = images.from_enum( - ImageEnum.EDITNODE, width=images.NODE_SIZE, scale=scale - ) - return image +ICON_SIZE: int = 48 +ANTENNA_SIZE: int = 32 class NodeDraw: @@ -160,7 +19,7 @@ class NodeDraw: self.image_file: Optional[str] = None self.node_type: Optional[NodeType] = None self.model: Optional[str] = None - self.services: set[str] = set() + self.services: Set[str] = set() self.label: Optional[str] = None @classmethod @@ -174,7 +33,7 @@ class NodeDraw: ) -> "NodeDraw": node_draw = NodeDraw() node_draw.image_enum = image_enum - node_draw.image = images.from_enum(image_enum, width=images.NODE_SIZE) + node_draw.image = Images.get(image_enum, ICON_SIZE) node_draw.node_type = node_type node_draw.label = label node_draw.model = model @@ -186,10 +45,135 @@ class NodeDraw: node_draw = NodeDraw() node_draw.custom = True node_draw.image_file = custom_node.image - node_draw.image = images.from_file(custom_node.image, width=images.NODE_SIZE) + node_draw.image = Images.get_custom(custom_node.image, ICON_SIZE) node_draw.node_type = NodeType.DEFAULT - node_draw.services = set(custom_node.services) + node_draw.services = custom_node.services node_draw.label = custom_node.name node_draw.model = custom_node.name node_draw.tooltip = custom_node.name return node_draw + + +class NodeUtils: + NODES: List[NodeDraw] = [] + NETWORK_NODES: List[NodeDraw] = [] + 
NODE_ICONS = {} + CONTAINER_NODES: Set[NodeType] = {NodeType.DEFAULT, NodeType.DOCKER, NodeType.LXC} + IMAGE_NODES: Set[NodeType] = {NodeType.DOCKER, NodeType.LXC} + WIRELESS_NODES: Set[NodeType] = {NodeType.WIRELESS_LAN, NodeType.EMANE} + RJ45_NODES: Set[NodeType] = {NodeType.RJ45} + BRIDGE_NODES: Set[NodeType] = {NodeType.HUB, NodeType.SWITCH} + IGNORE_NODES: Set[NodeType] = {NodeType.CONTROL_NET} + MOBILITY_NODES: Set[NodeType] = {NodeType.WIRELESS_LAN, NodeType.EMANE} + NODE_MODELS: Set[str] = {"router", "host", "PC", "mdr", "prouter"} + ROUTER_NODES: Set[str] = {"router", "mdr"} + ANTENNA_ICON: PhotoImage = None + + @classmethod + def is_bridge_node(cls, node: Node) -> bool: + return node.type in cls.BRIDGE_NODES + + @classmethod + def is_mobility(cls, node: Node) -> bool: + return node.type in cls.MOBILITY_NODES + + @classmethod + def is_router_node(cls, node: Node) -> bool: + return cls.is_model_node(node.type) and node.model in cls.ROUTER_NODES + + @classmethod + def is_ignore_node(cls, node_type: NodeType) -> bool: + return node_type in cls.IGNORE_NODES + + @classmethod + def is_container_node(cls, node_type: NodeType) -> bool: + return node_type in cls.CONTAINER_NODES + + @classmethod + def is_model_node(cls, node_type: NodeType) -> bool: + return node_type == NodeType.DEFAULT + + @classmethod + def is_image_node(cls, node_type: NodeType) -> bool: + return node_type in cls.IMAGE_NODES + + @classmethod + def is_wireless_node(cls, node_type: NodeType) -> bool: + return node_type in cls.WIRELESS_NODES + + @classmethod + def is_rj45_node(cls, node_type: NodeType) -> bool: + return node_type in cls.RJ45_NODES + + @classmethod + def node_icon( + cls, node_type: NodeType, model: str, gui_config: GuiConfig, scale: float = 1.0 + ) -> PhotoImage: + + image_enum = TypeToImage.get(node_type, model) + if image_enum: + return Images.get(image_enum, int(ICON_SIZE * scale)) + else: + image_stem = cls.get_image_file(gui_config, model) + if image_stem: + return Images.get_with_image_file(image_stem, int(ICON_SIZE * scale)) + + @classmethod + def node_image( + cls, core_node: Node, gui_config: GuiConfig, scale: float = 1.0 + ) -> PhotoImage: + image = cls.node_icon(core_node.type, core_node.model, gui_config, scale) + if core_node.icon: + try: + image = Images.create(core_node.icon, int(ICON_SIZE * scale)) + except OSError: + logging.error("invalid icon: %s", core_node.icon) + return image + + @classmethod + def is_custom(cls, node_type: NodeType, model: str) -> bool: + return node_type == NodeType.DEFAULT and model not in cls.NODE_MODELS + + @classmethod + def get_custom_node_services(cls, gui_config: GuiConfig, name: str) -> List[str]: + for custom_node in gui_config.nodes: + if custom_node.name == name: + return custom_node.services + return [] + + @classmethod + def get_image_file(cls, gui_config: GuiConfig, name: str) -> Optional[str]: + for custom_node in gui_config.nodes: + if custom_node.name == name: + return custom_node.image + return None + + @classmethod + def setup(cls) -> None: + nodes = [ + (ImageEnum.ROUTER, NodeType.DEFAULT, "Router", "router"), + (ImageEnum.HOST, NodeType.DEFAULT, "Host", "host"), + (ImageEnum.PC, NodeType.DEFAULT, "PC", "PC"), + (ImageEnum.MDR, NodeType.DEFAULT, "MDR", "mdr"), + (ImageEnum.PROUTER, NodeType.DEFAULT, "PRouter", "prouter"), + (ImageEnum.DOCKER, NodeType.DOCKER, "Docker", None), + (ImageEnum.LXC, NodeType.LXC, "LXC", None), + ] + for image_enum, node_type, label, model in nodes: + node_draw = NodeDraw.from_setup(image_enum, node_type, label, model) 
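Note: both sides of this nodeutils/images refactor resolve a node's icon the same way: an explicit node.icon path wins, then a custom-node image file from the GUI config, then a lookup keyed on (NodeType, model), with the newer code finally falling back to the edit-node placeholder. A standalone sketch of that table lookup is below; plain strings stand in for the real NodeType/ImageEnum members and pick_icon is a hypothetical helper, so the snippet runs without the GUI package installed.

    # (node type, model) -> icon lookup, mirroring TYPE_MAP / TypeToImage;
    # network node types key on None (newer) or "" (older) for the model part.
    TYPE_MAP = {
        ("DEFAULT", "router"): "ROUTER",
        ("DEFAULT", "PC"): "PC",
        ("SWITCH", None): "SWITCH",
    }

    def pick_icon(node_type, model):
        # node.icon overrides and custom-node images are checked before this
        # lookup in the real code; EDITNODE is the newer code's last resort
        return TYPE_MAP.get((node_type, model), "EDITNODE")

    print(pick_icon("DEFAULT", "router"))   # ROUTER
    print(pick_icon("DEFAULT", "custom1"))  # EDITNODE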
+ cls.NODES.append(node_draw) + cls.NODE_ICONS[(node_type, model)] = node_draw.image + + network_nodes = [ + (ImageEnum.HUB, NodeType.HUB, "Hub"), + (ImageEnum.SWITCH, NodeType.SWITCH, "Switch"), + (ImageEnum.WLAN, NodeType.WIRELESS_LAN, "WLAN"), + (ImageEnum.EMANE, NodeType.EMANE, "EMANE"), + (ImageEnum.RJ45, NodeType.RJ45, "RJ45"), + (ImageEnum.TUNNEL, NodeType.TUNNEL, "Tunnel"), + ] + for image_enum, node_type, label in network_nodes: + node_draw = NodeDraw.from_setup(image_enum, node_type, label) + cls.NETWORK_NODES.append(node_draw) + cls.NODE_ICONS[(node_type, None)] = node_draw.image + cls.ANTENNA_ICON = Images.get(ImageEnum.ANTENNA, ANTENNA_SIZE) diff --git a/daemon/core/gui/observers.py b/daemon/core/gui/observers.py index 8cf026bd..7879494b 100644 --- a/daemon/core/gui/observers.py +++ b/daemon/core/gui/observers.py @@ -1,13 +1,13 @@ import tkinter as tk from functools import partial -from typing import TYPE_CHECKING +from typing import TYPE_CHECKING, Dict from core.gui.dialogs.observers import ObserverDialog if TYPE_CHECKING: from core.gui.app import Application -OBSERVERS: dict[str, str] = { +OBSERVERS: Dict[str, str] = { "List Processes": "ps", "Show Interfaces": "ip address", "IPV4 Routes": "ip -4 route", diff --git a/daemon/core/gui/statusbar.py b/daemon/core/gui/statusbar.py index a4967cd6..25f5f972 100644 --- a/daemon/core/gui/statusbar.py +++ b/daemon/core/gui/statusbar.py @@ -3,7 +3,7 @@ status bar """ import tkinter as tk from tkinter import ttk -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, List, Optional from core.api.grpc.wrappers import ExceptionEvent, ExceptionLevel from core.gui.dialogs.alerts import AlertsDialog @@ -24,7 +24,7 @@ class StatusBar(ttk.Frame): self.alerts_button: Optional[ttk.Button] = None self.alert_style = Styles.no_alert self.running: bool = False - self.core_alarms: list[ExceptionEvent] = [] + self.core_alarms: List[ExceptionEvent] = [] self.draw() def draw(self) -> None: @@ -48,6 +48,7 @@ class StatusBar(ttk.Frame): self.zoom = ttk.Label(self, anchor=tk.CENTER, borderwidth=1, relief=tk.RIDGE) self.zoom.grid(row=0, column=1, sticky=tk.EW) + self.set_zoom(self.app.canvas.ratio) self.cpu_label = ttk.Label( self, anchor=tk.CENTER, borderwidth=1, relief=tk.RIDGE diff --git a/daemon/core/gui/task.py b/daemon/core/gui/task.py index 6bbeb70f..02148f5a 100644 --- a/daemon/core/gui/task.py +++ b/daemon/core/gui/task.py @@ -2,9 +2,7 @@ import logging import threading import time import tkinter as tk -from typing import TYPE_CHECKING, Any, Callable, Optional - -logger = logging.getLogger(__name__) +from typing import TYPE_CHECKING, Any, Callable, Optional, Tuple if TYPE_CHECKING: from core.gui.app import Application @@ -17,7 +15,7 @@ class ProgressTask: title: str, task: Callable, callback: Callable = None, - args: tuple[Any] = None, + args: Tuple[Any] = None, ): self.app: "Application" = app self.title: str = title @@ -25,7 +23,7 @@ class ProgressTask: self.callback: Callable = callback if args is None: args = () - self.args: tuple[Any] = args + self.args: Tuple[Any] = args self.time: Optional[float] = None def start(self) -> None: @@ -45,7 +43,7 @@ class ProgressTask: if self.callback: self.app.after(0, self.callback, *values) except Exception as e: - logger.exception("progress task exception") + logging.exception("progress task exception") self.app.show_exception("Task Error", e) finally: self.app.after(0, self.complete) diff --git a/daemon/core/gui/themes.py b/daemon/core/gui/themes.py index cb6280e5..45b109f0 100644 
--- a/daemon/core/gui/themes.py +++ b/daemon/core/gui/themes.py @@ -1,9 +1,10 @@ import tkinter as tk from tkinter import font, ttk +from typing import Dict, Tuple THEME_DARK: str = "black" -PADX: tuple[int, int] = (0, 5) -PADY: tuple[int, int] = (0, 5) +PADX: Tuple[int, int] = (0, 5) +PADY: Tuple[int, int] = (0, 5) FRAME_PAD: int = 5 DIALOG_PAD: int = 5 @@ -200,7 +201,7 @@ def theme_change(event: tk.Event) -> None: _alert_style(style, Styles.red_alert, "red") -def scale_fonts(fonts_size: dict[str, int], scale: float) -> None: +def scale_fonts(fonts_size: Dict[str, int], scale: float) -> None: for name in font.names(): f = font.nametofont(name) if name in fonts_size: diff --git a/daemon/core/gui/toolbar.py b/daemon/core/gui/toolbar.py index 7c32c0af..1f5589ba 100644 --- a/daemon/core/gui/toolbar.py +++ b/daemon/core/gui/toolbar.py @@ -3,25 +3,22 @@ import tkinter as tk from enum import Enum from functools import partial from tkinter import ttk -from typing import TYPE_CHECKING, Callable, Optional +from typing import TYPE_CHECKING, Callable, List, Optional from PIL.ImageTk import PhotoImage -from core.gui import nodeutils as nutils from core.gui.dialogs.colorpicker import ColorPickerDialog from core.gui.dialogs.runtool import RunToolDialog from core.gui.graph import tags from core.gui.graph.enums import GraphMode from core.gui.graph.shapeutils import ShapeType, is_marker from core.gui.images import ImageEnum -from core.gui.nodeutils import NodeDraw +from core.gui.nodeutils import NodeDraw, NodeUtils from core.gui.observers import ObserversMenu from core.gui.task import ProgressTask from core.gui.themes import Styles from core.gui.tooltip import Tooltip -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application @@ -60,11 +57,11 @@ class PickerFrame(ttk.Frame): image_file: str = None, ) -> None: if image_enum: - bar_image = self.app.get_enum_icon(image_enum, width=TOOLBAR_SIZE) - image = self.app.get_enum_icon(image_enum, width=PICKER_SIZE) + bar_image = self.app.get_icon(image_enum, TOOLBAR_SIZE) + image = self.app.get_icon(image_enum, PICKER_SIZE) else: - bar_image = self.app.get_file_icon(image_file, width=TOOLBAR_SIZE) - image = self.app.get_file_icon(image_file, width=PICKER_SIZE) + bar_image = self.app.get_custom_icon(image_file, TOOLBAR_SIZE) + image = self.app.get_custom_icon(image_file, PICKER_SIZE) button = ttk.Button( self, image=image, text=label, compound=tk.TOP, style=Styles.picker_button ) @@ -90,12 +87,12 @@ class ButtonBar(ttk.Frame): def __init__(self, master: tk.Widget, app: "Application") -> None: super().__init__(master) self.app: "Application" = app - self.radio_buttons: list[ttk.Button] = [] + self.radio_buttons: List[ttk.Button] = [] def create_button( self, image_enum: ImageEnum, func: Callable, tooltip: str, radio: bool = False ) -> ttk.Button: - image = self.app.get_enum_icon(image_enum, width=TOOLBAR_SIZE) + image = self.app.get_icon(image_enum, TOOLBAR_SIZE) button = ttk.Button(self, image=image, command=func) button.image = image button.grid(sticky=tk.EW) @@ -124,7 +121,7 @@ class MarkerFrame(ttk.Frame): def draw(self) -> None: self.columnconfigure(0, weight=1) - image = self.app.get_enum_icon(ImageEnum.DELETE, width=16) + image = self.app.get_icon(ImageEnum.DELETE, 16) button = ttk.Button(self, image=image, width=2, command=self.click_clear) button.image = image button.grid(sticky=tk.EW, pady=self.PAD) @@ -147,8 +144,7 @@ class MarkerFrame(ttk.Frame): Tooltip(self.color_frame, "Marker Color") def click_clear(self) -> None: - 
canvas = self.app.manager.current() - canvas.delete(tags.MARKER) + self.app.canvas.delete(tags.MARKER) def click_color(self, _event: tk.Event) -> None: dialog = ColorPickerDialog(self.app, self.app, self.color) @@ -193,8 +189,8 @@ class Toolbar(ttk.Frame): # these variables help keep track of what images being drawn so that scaling # is possible since PhotoImage does not have resize method - self.current_node: NodeDraw = nutils.NODES[0] - self.current_network: NodeDraw = nutils.NETWORK_NODES[0] + self.current_node: NodeDraw = NodeUtils.NODES[0] + self.current_network: NodeDraw = NodeUtils.NETWORK_NODES[0] self.current_annotation: ShapeType = ShapeType.MARKER self.annotation_enum: ImageEnum = ImageEnum.MARKER @@ -261,12 +257,12 @@ class Toolbar(ttk.Frame): def draw_node_picker(self) -> None: self.hide_marker() - self.app.manager.mode = GraphMode.NODE - self.app.manager.node_draw = self.current_node + self.app.canvas.mode = GraphMode.NODE + self.app.canvas.node_draw = self.current_node self.design_frame.select_radio(self.node_button) self.picker = PickerFrame(self.app, self.node_button) # draw default nodes - for node_draw in nutils.NODES: + for node_draw in NodeUtils.NODES: func = partial( self.update_button, self.node_button, node_draw, NodeTypeEnum.NODE ) @@ -282,12 +278,12 @@ class Toolbar(ttk.Frame): def click_selection(self) -> None: self.design_frame.select_radio(self.select_button) - self.app.manager.mode = GraphMode.SELECT + self.app.canvas.mode = GraphMode.SELECT self.hide_marker() def click_runtime_selection(self) -> None: self.runtime_frame.select_radio(self.runtime_select_button) - self.app.manager.mode = GraphMode.SELECT + self.app.canvas.mode = GraphMode.SELECT self.hide_marker() def click_start(self) -> None: @@ -295,22 +291,24 @@ class Toolbar(ttk.Frame): Start session handler redraw buttons, send node and link messages to grpc server. 
""" - self.app.menubar.set_state(is_runtime=True) - self.app.manager.mode = GraphMode.SELECT + self.app.menubar.change_menubar_item_state(is_runtime=True) + self.app.canvas.mode = GraphMode.SELECT enable_buttons(self.design_frame, enabled=False) task = ProgressTask( self.app, "Start", self.app.core.start_session, self.start_callback ) task.start() - def start_callback(self, result: bool, exceptions: list[str]) -> None: - self.set_runtime() - self.app.core.show_mobility_players() - if not result and exceptions: - message = "\n".join(exceptions) - self.app.show_exception_data( - "Start Exception", "Session failed to start", message - ) + def start_callback(self, result: bool, exceptions: List[str]) -> None: + if result: + self.set_runtime() + self.app.core.set_metadata() + self.app.core.show_mobility_players() + else: + enable_buttons(self.design_frame, enabled=True) + if exceptions: + message = "\n".join(exceptions) + self.app.show_error("Start Session Error", message) def set_runtime(self) -> None: enable_buttons(self.runtime_frame, enabled=True) @@ -326,7 +324,7 @@ class Toolbar(ttk.Frame): def click_link(self) -> None: self.design_frame.select_radio(self.link_button) - self.app.manager.mode = GraphMode.EDGE + self.app.canvas.mode = GraphMode.EDGE self.hide_marker() def update_button( @@ -336,10 +334,10 @@ class Toolbar(ttk.Frame): type_enum: NodeTypeEnum, image: PhotoImage, ) -> None: - logger.debug("update button(%s): %s", button, node_draw) + logging.debug("update button(%s): %s", button, node_draw) button.configure(image=image) button.image = image - self.app.manager.node_draw = node_draw + self.app.canvas.node_draw = node_draw if type_enum == NodeTypeEnum.NODE: self.current_node = node_draw elif type_enum == NodeTypeEnum.NETWORK: @@ -350,11 +348,11 @@ class Toolbar(ttk.Frame): Draw the options for link-layer button. """ self.hide_marker() - self.app.manager.mode = GraphMode.NODE - self.app.manager.node_draw = self.current_network + self.app.canvas.mode = GraphMode.NODE + self.app.canvas.node_draw = self.current_network self.design_frame.select_radio(self.network_button) self.picker = PickerFrame(self.app, self.network_button) - for node_draw in nutils.NETWORK_NODES: + for node_draw in NodeUtils.NETWORK_NODES: func = partial( self.update_button, self.network_button, node_draw, NodeTypeEnum.NETWORK ) @@ -366,8 +364,8 @@ class Toolbar(ttk.Frame): Draw the options for marker button. 
""" self.design_frame.select_radio(self.annotation_button) - self.app.manager.mode = GraphMode.ANNOTATION - self.app.manager.annotation_type = self.current_annotation + self.app.canvas.mode = GraphMode.ANNOTATION + self.app.canvas.annotation_type = self.current_annotation if is_marker(self.current_annotation): self.show_marker() self.picker = PickerFrame(self.app, self.annotation_button) @@ -384,7 +382,7 @@ class Toolbar(ttk.Frame): self.picker.show() def create_observe_button(self) -> None: - image = self.app.get_enum_icon(ImageEnum.OBSERVE, width=TOOLBAR_SIZE) + image = self.app.get_icon(ImageEnum.OBSERVE, TOOLBAR_SIZE) menu_button = ttk.Menubutton( self.runtime_frame, image=image, direction=tk.RIGHT ) @@ -397,8 +395,8 @@ class Toolbar(ttk.Frame): """ redraw buttons on the toolbar, send node and link messages to grpc server """ - logger.info("clicked stop button") - self.app.menubar.set_state(is_runtime=False) + logging.info("clicked stop button") + self.app.menubar.change_menubar_item_state(is_runtime=False) self.app.core.close_mobility_players() enable_buttons(self.runtime_frame, enabled=False) task = ProgressTask( @@ -408,15 +406,15 @@ class Toolbar(ttk.Frame): def stop_callback(self, result: bool) -> None: self.set_design() - self.app.manager.stopped_session() + self.app.canvas.stopped_session() def update_annotation( self, shape_type: ShapeType, image_enum: ImageEnum, image: PhotoImage ) -> None: - logger.debug("clicked annotation") + logging.debug("clicked annotation") self.annotation_button.configure(image=image) self.annotation_button.image = image - self.app.manager.annotation_type = shape_type + self.app.canvas.annotation_type = shape_type self.current_annotation = shape_type self.annotation_enum = image_enum if is_marker(shape_type): @@ -431,14 +429,14 @@ class Toolbar(ttk.Frame): self.marker_frame.grid() def click_run_button(self) -> None: - logger.debug("Click on RUN button") + logging.debug("Click on RUN button") dialog = RunToolDialog(self.app) dialog.show() def click_marker_button(self) -> None: self.runtime_frame.select_radio(self.runtime_marker_button) - self.app.manager.mode = GraphMode.ANNOTATION - self.app.manager.annotation_type = ShapeType.MARKER + self.app.canvas.mode = GraphMode.ANNOTATION + self.app.canvas.annotation_type = ShapeType.MARKER self.show_marker() def scale_button( @@ -446,9 +444,9 @@ class Toolbar(ttk.Frame): ) -> None: image = None if image_enum: - image = self.app.get_enum_icon(image_enum, width=TOOLBAR_SIZE) + image = self.app.get_icon(image_enum, TOOLBAR_SIZE) elif image_file: - image = self.app.get_file_icon(image_file, width=TOOLBAR_SIZE) + image = self.app.get_custom_icon(image_file, TOOLBAR_SIZE) if image: button.config(image=image) button.image = image diff --git a/daemon/core/gui/tooltip.py b/daemon/core/gui/tooltip.py index 6d84ac75..84a3178f 100644 --- a/daemon/core/gui/tooltip.py +++ b/daemon/core/gui/tooltip.py @@ -5,7 +5,7 @@ from typing import Optional from core.gui.themes import Styles -class Tooltip: +class Tooltip(object): """ Create tool tip for a given widget """ @@ -42,7 +42,7 @@ class Tooltip: y += self.widget.winfo_rooty() + 32 self.tw = tk.Toplevel(self.widget) self.tw.wm_overrideredirect(True) - self.tw.wm_geometry(f"+{x:d}+{y:d}") + self.tw.wm_geometry("+%d+%d" % (x, y)) self.tw.rowconfigure(0, weight=1) self.tw.columnconfigure(0, weight=1) frame = ttk.Frame(self.tw, style=Styles.tooltip_frame, padding=3) diff --git a/daemon/core/gui/validation.py b/daemon/core/gui/validation.py index 61500e84..2360ab0b 100644 --- 
a/daemon/core/gui/validation.py +++ b/daemon/core/gui/validation.py @@ -3,9 +3,8 @@ input validation """ import re import tkinter as tk -from re import Pattern from tkinter import ttk -from typing import Any, Optional +from typing import Any, Optional, Pattern SMALLEST_SCALE: float = 0.5 LARGEST_SCALE: float = 5.0 diff --git a/daemon/core/gui/widgets.py b/daemon/core/gui/widgets.py index 902f1132..004aa7b7 100644 --- a/daemon/core/gui/widgets.py +++ b/daemon/core/gui/widgets.py @@ -3,19 +3,17 @@ import tkinter as tk from functools import partial from pathlib import Path from tkinter import filedialog, font, ttk -from typing import TYPE_CHECKING, Any, Callable +from typing import TYPE_CHECKING, Any, Callable, Dict, Set, Type from core.api.grpc.wrappers import ConfigOption, ConfigOptionType from core.gui import appconfig, themes, validation from core.gui.dialogs.dialog import Dialog from core.gui.themes import FRAME_PAD, PADX, PADY -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.gui.app import Application -INT_TYPES: set[ConfigOptionType] = { +INT_TYPES: Set[ConfigOptionType] = { ConfigOptionType.UINT8, ConfigOptionType.UINT16, ConfigOptionType.UINT32, @@ -40,7 +38,7 @@ class FrameScroll(ttk.Frame): self, master: tk.Widget, app: "Application", - _cls: type[ttk.Frame] = ttk.Frame, + _cls: Type[ttk.Frame] = ttk.Frame, **kw: Any ) -> None: super().__init__(master, **kw) @@ -86,14 +84,14 @@ class ConfigFrame(ttk.Notebook): self, master: tk.Widget, app: "Application", - config: dict[str, ConfigOption], + config: Dict[str, ConfigOption], enabled: bool = True, **kw: Any ) -> None: super().__init__(master, **kw) self.app: "Application" = app - self.config: dict[str, ConfigOption] = config - self.values: dict[str, tk.StringVar] = {} + self.config: Dict[str, ConfigOption] = config + self.values: Dict[str, tk.StringVar] = {} self.enabled: bool = enabled def draw_config(self) -> None: @@ -163,10 +161,10 @@ class ConfigFrame(ttk.Notebook): ) entry.grid(row=index, column=1, sticky=tk.EW) else: - logger.error("unhandled config option type: %s", option.type) + logging.error("unhandled config option type: %s", option.type) self.values[option.name] = value - def parse_config(self) -> dict[str, str]: + def parse_config(self) -> Dict[str, str]: for key in self.config: option = self.config[key] value = self.values[key] @@ -180,7 +178,7 @@ class ConfigFrame(ttk.Notebook): option.value = config_value return {x: self.config[x].value for x in self.config} - def set_values(self, config: dict[str, str]) -> None: + def set_values(self, config: Dict[str, str]) -> None: for name, data in config.items(): option = self.config[name] value = self.values[name] @@ -257,13 +255,6 @@ class CodeText(ttk.Frame): yscrollbar.grid(row=0, column=1, sticky=tk.NS) self.text.configure(yscrollcommand=yscrollbar.set) - def get_text(self) -> str: - return self.text.get(1.0, tk.END) - - def set_text(self, text: str) -> None: - self.text.delete(1.0, tk.END) - self.text.insert(tk.END, text.rstrip()) - class Spinbox(ttk.Entry): def __init__(self, master: tk.BaseWidget = None, **kwargs: Any) -> None: diff --git a/daemon/core/location/event.py b/daemon/core/location/event.py index 9b300241..7f8a33a1 100644 --- a/daemon/core/location/event.py +++ b/daemon/core/location/event.py @@ -6,7 +6,7 @@ import heapq import threading import time from functools import total_ordering -from typing import Any, Callable, Optional +from typing import Any, Callable, Dict, List, Optional, Tuple class Timer(threading.Thread): @@ -19,8 +19,8 @@ 
class Timer(threading.Thread): self, interval: float, func: Callable[..., None], - args: tuple[Any] = None, - kwargs: dict[Any, Any] = None, + args: Tuple[Any] = None, + kwargs: Dict[Any, Any] = None, ) -> None: """ Create a Timer instance. @@ -38,11 +38,11 @@ class Timer(threading.Thread): # validate arguments were provided if args is None: args = () - self.args: tuple[Any] = args + self.args: Tuple[Any] = args # validate keyword arguments were provided if kwargs is None: kwargs = {} - self.kwargs: dict[Any, Any] = kwargs + self.kwargs: Dict[Any, Any] = kwargs def cancel(self) -> bool: """ @@ -96,8 +96,8 @@ class Event: self.eventnum: int = eventnum self.time: float = event_time self.func: Callable[..., None] = func - self.args: tuple[Any] = args - self.kwds: dict[Any, Any] = kwds + self.args: Tuple[Any] = args + self.kwds: Dict[Any, Any] = kwds self.canceled: bool = False def __lt__(self, other: "Event") -> bool: @@ -135,7 +135,7 @@ class EventLoop: Creates a EventLoop instance. """ self.lock: threading.RLock = threading.RLock() - self.queue: list[Event] = [] + self.queue: List[Event] = [] self.eventnum: int = 0 self.timer: Optional[Timer] = None self.running: bool = False diff --git a/daemon/core/location/geo.py b/daemon/core/location/geo.py index 78308728..6c8eb651 100644 --- a/daemon/core/location/geo.py +++ b/daemon/core/location/geo.py @@ -3,16 +3,16 @@ Provides conversions from x,y,z to lon,lat,alt. """ import logging +from typing import Tuple import pyproj from pyproj import Transformer from core.emulator.enumerations import RegisterTlvs -logger = logging.getLogger(__name__) -SCALE_FACTOR: float = 100.0 -CRS_WGS84: int = 4326 -CRS_PROJ: int = 3857 +SCALE_FACTOR = 100.0 +CRS_WGS84 = 4326 +CRS_PROJ = 3857 class GeoLocation: @@ -34,9 +34,9 @@ class GeoLocation: self.to_geo: Transformer = pyproj.Transformer.from_crs( CRS_PROJ, CRS_WGS84, always_xy=True ) - self.refproj: tuple[float, float, float] = (0.0, 0.0, 0.0) - self.refgeo: tuple[float, float, float] = (0.0, 0.0, 0.0) - self.refxyz: tuple[float, float, float] = (0.0, 0.0, 0.0) + self.refproj: Tuple[float, float, float] = (0.0, 0.0, 0.0) + self.refgeo: Tuple[float, float, float] = (0.0, 0.0, 0.0) + self.refxyz: Tuple[float, float, float] = (0.0, 0.0, 0.0) self.refscale: float = 1.0 def setrefgeo(self, lat: float, lon: float, alt: float) -> None: @@ -83,7 +83,7 @@ class GeoLocation: return 0.0 return SCALE_FACTOR * (value / self.refscale) - def getxyz(self, lat: float, lon: float, alt: float) -> tuple[float, float, float]: + def getxyz(self, lat: float, lon: float, alt: float) -> Tuple[float, float, float]: """ Convert provided lon,lat,alt to x,y,z. @@ -92,7 +92,7 @@ class GeoLocation: :param alt: altitude value :return: x,y,z representation of provided values """ - logger.debug("input lon,lat,alt(%s, %s, %s)", lon, lat, alt) + logging.debug("input lon,lat,alt(%s, %s, %s)", lon, lat, alt) px, py = self.to_pixels.transform(lon, lat) px -= self.refproj[0] py -= self.refproj[1] @@ -100,10 +100,10 @@ class GeoLocation: x = self.meters2pixels(px) + self.refxyz[0] y = -(self.meters2pixels(py) + self.refxyz[1]) z = self.meters2pixels(pz) + self.refxyz[2] - logger.debug("result x,y,z(%s, %s, %s)", x, y, z) + logging.debug("result x,y,z(%s, %s, %s)", x, y, z) return x, y, z - def getgeo(self, x: float, y: float, z: float) -> tuple[float, float, float]: + def getgeo(self, x: float, y: float, z: float) -> Tuple[float, float, float]: """ Convert provided x,y,z to lon,lat,alt. 
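# The GeoLocation hunks above convert between canvas x,y,z and lon,lat,alt by
# projecting WGS84 (EPSG:4326) coordinates into Web Mercator (EPSG:3857) meters
# with pyproj and then scaling meters to canvas pixels. A standalone sketch of
# that round trip; the reference point, scale direction, and axis flip here are
# simplified assumptions, not the exact GeoLocation math. Requires pyproj,
# which this module already depends on.
from pyproj import Transformer

SCALE_FACTOR = 100.0  # illustrative meters-per-pixel scaling, as above
to_meters = Transformer.from_crs(4326, 3857, always_xy=True)
to_geo = Transformer.from_crs(3857, 4326, always_xy=True)
ref_x, ref_y = to_meters.transform(-74.0, 40.7)  # reference geo point mapped to canvas (0, 0)

def geo_to_xy(lon: float, lat: float):
    # project to meters, make relative to the reference point, then scale to pixels
    px, py = to_meters.transform(lon, lat)
    return (px - ref_x) / SCALE_FACTOR, -(py - ref_y) / SCALE_FACTOR

def xy_to_geo(x: float, y: float):
    # reverse: pixels back to projected meters, then unproject to lon/lat
    px = ref_x + x * SCALE_FACTOR
    py = ref_y - y * SCALE_FACTOR
    return to_geo.transform(px, py)

if __name__ == "__main__":
    x, y = geo_to_xy(-73.99, 40.71)
    print((x, y), xy_to_geo(x, y))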
@@ -112,7 +112,7 @@ class GeoLocation: :param z: z value :return: lat,lon,alt representation of provided values """ - logger.debug("input x,y(%s, %s)", x, y) + logging.debug("input x,y(%s, %s)", x, y) x -= self.refxyz[0] y = -(y - self.refxyz[1]) if z is None: @@ -123,5 +123,5 @@ class GeoLocation: py = self.refproj[1] + self.pixels2meters(y) lon, lat = self.to_geo.transform(px, py) alt = self.refgeo[2] + self.pixels2meters(z) - logger.debug("result lon,lat,alt(%s, %s, %s)", lon, lat, alt) + logging.debug("result lon,lat,alt(%s, %s, %s)", lon, lat, alt) return lat, lon, alt diff --git a/daemon/core/location/mobility.py b/daemon/core/location/mobility.py index ebac9bc5..95516ce8 100644 --- a/daemon/core/location/mobility.py +++ b/daemon/core/location/mobility.py @@ -9,30 +9,25 @@ import threading import time from functools import total_ordering from pathlib import Path -from typing import TYPE_CHECKING, Callable, Optional, Union +from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Tuple, Union from core import utils -from core.config import ( - ConfigBool, - ConfigFloat, - ConfigGroup, - ConfigInt, - ConfigString, - ConfigurableOptions, - Configuration, - ModelManager, -) +from core.config import ConfigGroup, ConfigurableOptions, Configuration, ModelManager from core.emane.nodes import EmaneNet from core.emulator.data import EventData, LinkData, LinkOptions -from core.emulator.enumerations import EventTypes, LinkTypes, MessageFlags, RegisterTlvs +from core.emulator.enumerations import ( + ConfigDataTypes, + EventTypes, + LinkTypes, + MessageFlags, + RegisterTlvs, +) from core.errors import CoreError from core.executables import BASH from core.nodes.base import CoreNode from core.nodes.interface import CoreInterface from core.nodes.network import WlanNode -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.session import Session @@ -47,43 +42,6 @@ def get_mobility_node(session: "Session", node_id: int) -> Union[WlanNode, Emane return session.get_node(node_id, EmaneNet) -def get_config_int(current: int, config: dict[str, str], name: str) -> Optional[int]: - """ - Convenience function to get config values as int. - - :param current: current config value to use when one is not provided - :param config: config to get values from - :param name: name of config value to get - :return: current config value when not provided, new value otherwise - """ - value = get_config_float(current, config, name) - if value is not None: - value = int(value) - return value - - -def get_config_float( - current: Union[int, float], config: dict[str, str], name: str -) -> Optional[float]: - """ - Convenience function to get config values as float. - - :param current: current config value to use when one is not provided - :param config: config to get values from - :param name: name of config value to get - :return: current config value when not provided, new value otherwise - """ - value = config.get(name) - if value is not None: - if value == "": - value = None - else: - value = float(value) - else: - value = current - return value - - class MobilityManager(ModelManager): """ Member of session class for handling configuration data for mobility and @@ -112,7 +70,7 @@ class MobilityManager(ModelManager): """ self.config_reset() - def startup(self, node_ids: list[int] = None) -> None: + def startup(self, node_ids: List[int] = None) -> None: """ Session is transitioning from instantiation to runtime state. Instantiate any mobility models that have been configured for a WLAN. 
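# Mobility model options arrive as a dict of strings (see update_config further
# below), so numeric settings have to be coerced before use. A minimal sketch of
# the "keep the current value unless the config provides one" pattern used by
# the helpers in this file; the function names here are illustrative, not the
# CORE API.
from typing import Dict, Optional, Union

def coerce_float(current: Union[int, float], config: Dict[str, str], name: str) -> Optional[float]:
    value = config.get(name)
    if value is None:
        return current   # option not supplied, keep the current value
    if value == "":
        return None      # explicitly cleared
    return float(value)

def coerce_int(current: int, config: Dict[str, str], name: str) -> Optional[int]:
    value = coerce_float(current, config, name)
    return int(value) if value is not None else None

# example: only "range" was supplied; "bandwidth" keeps its current value
config = {"range": "275", "jitter": ""}
print(coerce_int(0, config, "range"))              # 275
print(coerce_int(54000000, config, "bandwidth"))   # 54000000
print(coerce_int(0, config, "jitter"))             # None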
@@ -123,7 +81,7 @@ class MobilityManager(ModelManager): if node_ids is None: node_ids = self.nodes() for node_id in node_ids: - logger.debug( + logging.debug( "node(%s) mobility startup: %s", node_id, self.get_all_configs(node_id) ) try: @@ -137,8 +95,8 @@ class MobilityManager(ModelManager): if node.mobility: self.session.event_loop.add_event(0.0, node.mobility.startup) except CoreError: - logger.exception("mobility startup error") - logger.warning( + logging.exception("mobility startup error") + logging.warning( "skipping mobility configuration for unknown node: %s", node_id ) @@ -156,7 +114,7 @@ class MobilityManager(ModelManager): try: node = get_mobility_node(self.session, node_id) except CoreError: - logger.exception( + logging.exception( "ignoring event for model(%s), unknown node(%s)", name, node_id ) return @@ -166,17 +124,17 @@ class MobilityManager(ModelManager): for model in models: cls = self.models.get(model) if not cls: - logger.warning("ignoring event for unknown model '%s'", model) + logging.warning("ignoring event for unknown model '%s'", model) continue if cls.config_type in [RegisterTlvs.WIRELESS, RegisterTlvs.MOBILITY]: model = node.mobility else: continue if model is None: - logger.warning("ignoring event, %s has no model", node.name) + logging.warning("ignoring event, %s has no model", node.name) continue if cls.name != model.name: - logger.warning( + logging.warning( "ignoring event for %s wrong model %s,%s", node.name, cls.name, @@ -225,6 +183,7 @@ class WirelessModel(ConfigurableOptions): """ config_type: RegisterTlvs = RegisterTlvs.WIRELESS + bitmap: str = None position_callback: Callable[[CoreInterface], None] = None def __init__(self, session: "Session", _id: int) -> None: @@ -237,7 +196,7 @@ class WirelessModel(ConfigurableOptions): self.session: "Session" = session self.id: int = _id - def links(self, flags: MessageFlags = MessageFlags.NONE) -> list[LinkData]: + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: """ May be used if the model can populate the GUI with wireless (green) link lines. @@ -247,7 +206,7 @@ class WirelessModel(ConfigurableOptions): """ return [] - def update(self, moved_ifaces: list[CoreInterface]) -> None: + def update(self, moved_ifaces: List[CoreInterface]) -> None: """ Update this wireless model. @@ -256,7 +215,7 @@ class WirelessModel(ConfigurableOptions): """ raise NotImplementedError - def update_config(self, config: dict[str, str]) -> None: + def update_config(self, config: Dict[str, str]) -> None: """ For run-time updates of model config. Returns True when position callback and set link parameters should be invoked. 
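# Mobility models are not started inline: startup() above queues them on the
# session event loop ("add_event(0.0, node.mobility.startup)"), and
# WayPointMobility later re-queues itself every refresh interval. A tiny
# standalone sketch of that heapq-based "run callables at a future time"
# pattern; it is deliberately simplified and single-threaded, whereas the real
# EventLoop in core/location/event.py uses a Timer thread and locking.
import heapq
import time

class MiniEventLoop:
    def __init__(self) -> None:
        self.queue = []   # entries of (fire_time, sequence, callback)
        self.counter = 0  # tie-breaker so callbacks never get compared

    def add_event(self, delay: float, func) -> None:
        heapq.heappush(self.queue, (time.monotonic() + delay, self.counter, func))
        self.counter += 1

    def run(self) -> None:
        # pop events in time order, sleeping until each one is due
        while self.queue:
            fire_time, _, func = heapq.heappop(self.queue)
            time.sleep(max(0.0, fire_time - time.monotonic()))
            func()

loop = MiniEventLoop()
loop.add_event(0.0, lambda: print("mobility startup"))
loop.add_event(0.05, lambda: print("first refresh"))
loop.run()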
@@ -275,13 +234,40 @@ class BasicRangeModel(WirelessModel): """ name: str = "basic_range" - options: list[Configuration] = [ - ConfigInt(id="range", default="275", label="wireless range (pixels)"), - ConfigInt(id="bandwidth", default="54000000", label="bandwidth (bps)"), - ConfigInt(id="jitter", default="0", label="transmission jitter (usec)"), - ConfigInt(id="delay", default="5000", label="transmission delay (usec)"), - ConfigFloat(id="error", default="0.0", label="loss (%)"), - ConfigBool(id="promiscuous", default="0", label="promiscuous mode"), + options: List[Configuration] = [ + Configuration( + _id="range", + _type=ConfigDataTypes.UINT32, + default="275", + label="wireless range (pixels)", + ), + Configuration( + _id="bandwidth", + _type=ConfigDataTypes.UINT64, + default="54000000", + label="bandwidth (bps)", + ), + Configuration( + _id="jitter", + _type=ConfigDataTypes.UINT64, + default="0", + label="transmission jitter (usec)", + ), + Configuration( + _id="delay", + _type=ConfigDataTypes.UINT64, + default="5000", + label="transmission delay (usec)", + ), + Configuration( + _id="error", _type=ConfigDataTypes.STRING, default="0", label="loss (%)" + ), + Configuration( + _id="promiscuous", + _type=ConfigDataTypes.BOOL, + default="0", + label="promiscuous mode", + ), ] @classmethod @@ -298,7 +284,7 @@ class BasicRangeModel(WirelessModel): super().__init__(session, _id) self.session: "Session" = session self.wlan: WlanNode = session.get_node(_id, WlanNode) - self.iface_to_pos: dict[CoreInterface, tuple[float, float, float]] = {} + self.iface_to_pos: Dict[CoreInterface, Tuple[float, float, float]] = {} self.iface_lock: threading.Lock = threading.Lock() self.range: int = 0 self.bw: Optional[int] = None @@ -307,6 +293,25 @@ class BasicRangeModel(WirelessModel): self.jitter: Optional[int] = None self.promiscuous: bool = False + def _get_config(self, current_value: int, config: Dict[str, str], name: str) -> int: + """ + Convenience for updating value to use from a provided configuration. + + :param current_value: current config value to use when one is not provided + :param config: config to get values from + :param name: name of config value to get + :return: current config value when not provided, new value otherwise + """ + value = config.get(name) + if value is not None: + if value == "": + value = None + else: + value = int(float(value)) + else: + value = current_value + return value + def setlinkparams(self) -> None: """ Apply link parameters to all interfaces. This is invoked from @@ -320,10 +325,9 @@ class BasicRangeModel(WirelessModel): loss=self.loss, jitter=self.jitter, ) - iface.options.update(options) - iface.set_config() + self.wlan.linkconfig(iface, options) - def get_position(self, iface: CoreInterface) -> tuple[float, float, float]: + def get_position(self, iface: CoreInterface) -> Tuple[float, float, float]: """ Retrieve network interface position. 
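# BasicRangeModel links or unlinks interface pairs by comparing the
# straight-line distance between their nodes against the configured "range"
# option shown above. A minimal standalone sketch of that decision, mirroring
# the calcdistance() math in the next hunk; the node/position structures are
# simplified stand-ins, not CORE classes.
import math
from typing import Dict, Tuple

Point = Tuple[float, float, float]

def distance(p1: Point, p2: Point) -> float:
    a, b, c = (p1[i] - p2[i] for i in range(3))
    return math.hypot(math.hypot(a, b), c)

def recalc_links(positions: Dict[str, Point], wireless_range: float) -> Dict[Tuple[str, str], bool]:
    # True means the pair is within range and should be linked
    links = {}
    names = sorted(positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            links[(a, b)] = distance(positions[a], positions[b]) <= wireless_range
    return links

positions = {"n1": (100.0, 100.0, 0.0), "n2": (300.0, 100.0, 0.0), "n3": (900.0, 900.0, 0.0)}
print(recalc_links(positions, wireless_range=275.0))
# {('n1', 'n2'): True, ('n1', 'n3'): False, ('n2', 'n3'): False}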
@@ -343,16 +347,18 @@ class BasicRangeModel(WirelessModel): :return: nothing """ x, y, z = iface.node.position.get() - with self.iface_lock: - self.iface_to_pos[iface] = (x, y, z) - if x is None or y is None: - return - for iface2 in self.iface_to_pos: - self.calclink(iface, iface2) + self.iface_lock.acquire() + self.iface_to_pos[iface] = (x, y, z) + if x is None or y is None: + self.iface_lock.release() + return + for iface2 in self.iface_to_pos: + self.calclink(iface, iface2) + self.iface_lock.release() position_callback = set_position - def update(self, moved_ifaces: list[CoreInterface]) -> None: + def update(self, moved_ifaces: List[CoreInterface]) -> None: """ Node positions have changed without recalc. Update positions from node.position, then re-calculate links for those that have moved. @@ -386,33 +392,38 @@ class BasicRangeModel(WirelessModel): """ if iface == iface2: return + try: x, y, z = self.iface_to_pos[iface] x2, y2, z2 = self.iface_to_pos[iface2] + if x2 is None or y2 is None: return + d = self.calcdistance((x, y, z), (x2, y2, z2)) + # ordering is important, to keep the wlan._linked dict organized a = min(iface, iface2) b = max(iface, iface2) - with self.wlan.linked_lock: - linked = self.wlan.is_linked(a, b) + + with self.wlan._linked_lock: + linked = self.wlan.linked(a, b) if d > self.range: if linked: - logger.debug("was linked, unlinking") + logging.debug("was linked, unlinking") self.wlan.unlink(a, b) self.sendlinkmsg(a, b, unlink=True) else: if not linked: - logger.debug("was not linked, linking") + logging.debug("was not linked, linking") self.wlan.link(a, b) self.sendlinkmsg(a, b) except KeyError: - logger.exception("error getting interfaces during calclink") + logging.exception("error getting interfaces during calclinkS") @staticmethod def calcdistance( - p1: tuple[float, float, float], p2: tuple[float, float, float] + p1: Tuple[float, float, float], p2: Tuple[float, float, float] ) -> float: """ Calculate the distance between two three-dimensional points. @@ -428,22 +439,22 @@ class BasicRangeModel(WirelessModel): c = p1[2] - p2[2] return math.hypot(math.hypot(a, b), c) - def update_config(self, config: dict[str, str]) -> None: + def update_config(self, config: Dict[str, str]) -> None: """ Configuration has changed during runtime. 
:param config: values to update configuration :return: nothing """ - self.range = get_config_int(self.range, config, "range") + self.range = self._get_config(self.range, config, "range") if self.range is None: self.range = 0 - logger.debug("wlan %s set range to %s", self.wlan.name, self.range) - self.bw = get_config_int(self.bw, config, "bandwidth") - self.delay = get_config_int(self.delay, config, "delay") - self.loss = get_config_float(self.loss, config, "error") - self.jitter = get_config_int(self.jitter, config, "jitter") - promiscuous = config.get("promiscuous", "0") == "1" + logging.debug("wlan %s set range to %s", self.wlan.name, self.range) + self.bw = self._get_config(self.bw, config, "bandwidth") + self.delay = self._get_config(self.delay, config, "delay") + self.loss = self._get_config(self.loss, config, "error") + self.jitter = self._get_config(self.jitter, config, "jitter") + promiscuous = config["promiscuous"] == "1" if self.promiscuous and not promiscuous: self.wlan.net_client.set_mac_learning(self.wlan.brname, LEARNING_ENABLED) elif not self.promiscuous and promiscuous: @@ -487,7 +498,7 @@ class BasicRangeModel(WirelessModel): link_data = self.create_link_data(iface, iface2, message_type) self.session.broadcast_link(link_data) - def links(self, flags: MessageFlags = MessageFlags.NONE) -> list[LinkData]: + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: """ Return a list of wireless link messages for when the GUI reconnects. @@ -495,10 +506,10 @@ class BasicRangeModel(WirelessModel): :return: all link data """ all_links = [] - with self.wlan.linked_lock: - for a in self.wlan.linked: - for b in self.wlan.linked[a]: - if self.wlan.linked[a][b]: + with self.wlan._linked_lock: + for a in self.wlan._linked: + for b in self.wlan._linked[a]: + if self.wlan._linked[a][b]: all_links.append(self.create_link_data(a, b, flags)) return all_links @@ -513,7 +524,7 @@ class WayPoint: self, _time: float, node_id: int, - coords: tuple[float, float, Optional[float]], + coords: Tuple[float, float, Optional[float]], speed: float, ) -> None: """ @@ -526,7 +537,7 @@ class WayPoint: """ self.time: float = _time self.node_id: int = node_id - self.coords: tuple[float, float, Optional[float]] = coords + self.coords: Tuple[float, float, Optional[float]] = coords self.speed: float = speed def __eq__(self, other: "WayPoint") -> bool: @@ -563,10 +574,10 @@ class WayPointMobility(WirelessModel): """ super().__init__(session=session, _id=_id) self.state: int = self.STATE_STOPPED - self.queue: list[WayPoint] = [] - self.queue_copy: list[WayPoint] = [] - self.points: dict[int, WayPoint] = {} - self.initial: dict[int, WayPoint] = {} + self.queue: List[WayPoint] = [] + self.queue_copy: List[WayPoint] = [] + self.points: Dict[int, WayPoint] = {} + self.initial: Dict[int, WayPoint] = {} self.lasttime: Optional[float] = None self.endtime: Optional[int] = None self.timezero: float = 0.0 @@ -627,7 +638,7 @@ class WayPointMobility(WirelessModel): moved_ifaces.append(iface) # calculate all ranges after moving nodes; this saves calculations - self.net.wireless_model.update(moved_ifaces) + self.net.model.update(moved_ifaces) # TODO: check session state self.session.event_loop.add_event(0.001 * self.refresh_ms, self.runround) @@ -705,7 +716,7 @@ class WayPointMobility(WirelessModel): x, y, z = self.initial[node.id].coords self.setnodeposition(node, x, y, z) moved_ifaces.append(iface) - self.net.wireless_model.update(moved_ifaces) + self.net.model.update(moved_ifaces) def addwaypoint( self, @@ 
-855,19 +866,48 @@ class Ns2ScriptedMobility(WayPointMobility): """ name: str = "ns2script" - options: list[Configuration] = [ - ConfigString(id="file", label="mobility script file"), - ConfigInt(id="refresh_ms", default="50", label="refresh time (ms)"), - ConfigBool(id="loop", default="1", label="loop"), - ConfigString(id="autostart", label="auto-start seconds (0.0 for runtime)"), - ConfigString(id="map", label="node mapping (optional, e.g. 0:1,1:2,2:3)"), - ConfigString(id="script_start", label="script file to run upon start"), - ConfigString(id="script_pause", label="script file to run upon pause"), - ConfigString(id="script_stop", label="script file to run upon stop"), + options: List[Configuration] = [ + Configuration( + _id="file", _type=ConfigDataTypes.STRING, label="mobility script file" + ), + Configuration( + _id="refresh_ms", + _type=ConfigDataTypes.UINT32, + default="50", + label="refresh time (ms)", + ), + Configuration( + _id="loop", _type=ConfigDataTypes.BOOL, default="1", label="loop" + ), + Configuration( + _id="autostart", + _type=ConfigDataTypes.STRING, + label="auto-start seconds (0.0 for runtime)", + ), + Configuration( + _id="map", + _type=ConfigDataTypes.STRING, + label="node mapping (optional, e.g. 0:1,1:2,2:3)", + ), + Configuration( + _id="script_start", + _type=ConfigDataTypes.STRING, + label="script file to run upon start", + ), + Configuration( + _id="script_pause", + _type=ConfigDataTypes.STRING, + label="script file to run upon pause", + ), + Configuration( + _id="script_stop", + _type=ConfigDataTypes.STRING, + label="script file to run upon stop", + ), ] @classmethod - def config_groups(cls) -> list[ConfigGroup]: + def config_groups(cls) -> List[ConfigGroup]: return [ ConfigGroup("ns-2 Mobility Script Parameters", 1, len(cls.configurations())) ] @@ -880,22 +920,24 @@ class Ns2ScriptedMobility(WayPointMobility): :param _id: object id """ super().__init__(session, _id) - self.file: Optional[Path] = None + self.file: Optional[str] = None + self.refresh_ms: Optional[int] = None + self.loop: Optional[bool] = None self.autostart: Optional[str] = None - self.nodemap: dict[int, int] = {} + self.nodemap: Dict[int, int] = {} self.script_start: Optional[str] = None self.script_pause: Optional[str] = None self.script_stop: Optional[str] = None - def update_config(self, config: dict[str, str]) -> None: - self.file = Path(config["file"]) - logger.info( + def update_config(self, config: Dict[str, str]) -> None: + self.file = config["file"] + logging.info( "ns-2 scripted mobility configured for WLAN %d using file: %s", self.id, self.file, ) self.refresh_ms = int(config["refresh_ms"]) - self.loop = config["loop"] == "1" + self.loop = config["loop"].lower() == "on" self.autostart = config["autostart"] self.parsemap(config["map"]) self.script_start = config["script_start"] @@ -913,15 +955,15 @@ class Ns2ScriptedMobility(WayPointMobility): :return: nothing """ - file_path = self.findfile(self.file) + filename = self.findfile(self.file) try: - f = file_path.open("r") - except OSError: - logger.exception( + f = open(filename, "r") + except IOError: + logging.exception( "ns-2 scripted mobility failed to load file: %s", self.file ) return - logger.info("reading ns-2 script file: %s", file_path) + logging.info("reading ns-2 script file: %s", filename) ln = 0 ix = iy = iz = None inodenum = None @@ -937,13 +979,13 @@ class Ns2ScriptedMobility(WayPointMobility): # waypoints: # $ns_ at 1.00 "$node_(6) setdest 500.0 178.0 25.0" parts = line.split() - line_time = float(parts[2]) + time = 
float(parts[2]) nodenum = parts[3][1 + parts[3].index("(") : parts[3].index(")")] x = float(parts[5]) y = float(parts[6]) z = None speed = float(parts[7].strip('"')) - self.addwaypoint(line_time, self.map(nodenum), x, y, z, speed) + self.addwaypoint(time, self.map(nodenum), x, y, z, speed) elif line[:7] == "$node_(": # initial position (time=0, speed=0): # $node_(6) set X_ 780.0 @@ -964,38 +1006,38 @@ class Ns2ScriptedMobility(WayPointMobility): else: raise ValueError except ValueError: - logger.exception( + logging.exception( "skipping line %d of file %s '%s'", ln, self.file, line ) continue if ix is not None and iy is not None: self.addinitial(self.map(inodenum), ix, iy, iz) - def findfile(self, file_path: Path) -> Path: + def findfile(self, file_name: str) -> str: """ Locate a script file. If the specified file doesn't exist, look in the same directory as the scenario file, or in gui directories. - :param file_path: file name to find + :param file_name: file name to find :return: absolute path to the file :raises CoreError: when file is not found """ - file_path = file_path.expanduser() + file_path = Path(file_name).expanduser() if file_path.exists(): - return file_path - if self.session.file_path: - session_file_path = self.session.file_path.parent / file_path - if session_file_path.exists(): - return session_file_path + return str(file_path) + if self.session.file_name: + file_path = Path(self.session.file_name).parent / file_name + if file_path.exists(): + return str(file_path) if self.session.user: user_path = Path(f"~{self.session.user}").expanduser() - configs_path = user_path / ".core" / "configs" / file_path - if configs_path.exists(): - return configs_path - mobility_path = user_path / ".coregui" / "mobility" / file_path - if mobility_path.exists(): - return mobility_path - raise CoreError(f"invalid file: {file_path}") + file_path = user_path / ".core" / "configs" / file_name + if file_path.exists(): + return str(file_path) + file_path = user_path / ".coregui" / "mobility" / file_name + if file_path.exists(): + return str(file_path) + raise CoreError(f"invalid file: {file_name}") def parsemap(self, mapstr: str) -> None: """ @@ -1007,6 +1049,7 @@ class Ns2ScriptedMobility(WayPointMobility): self.nodemap = {} if mapstr.strip() == "": return + for pair in mapstr.split(","): parts = pair.split(":") try: @@ -1014,7 +1057,7 @@ class Ns2ScriptedMobility(WayPointMobility): raise ValueError self.nodemap[int(parts[0])] = int(parts[1]) except ValueError: - logger.exception("ns-2 mobility node map error") + logging.exception("ns-2 mobility node map error") def map(self, nodenum: str) -> int: """ @@ -1036,19 +1079,19 @@ class Ns2ScriptedMobility(WayPointMobility): :return: nothing """ if self.autostart == "": - logger.info("not auto-starting ns-2 script for %s", self.net.name) + logging.info("not auto-starting ns-2 script for %s", self.net.name) return try: t = float(self.autostart) except ValueError: - logger.exception( + logging.exception( "Invalid auto-start seconds specified '%s' for %s", self.autostart, self.net.name, ) return self.movenodesinitial() - logger.info("scheduling ns-2 script for %s autostart at %s", self.net.name, t) + logging.info("scheduling ns-2 script for %s autostart at %s", self.net.name, t) self.state = self.STATE_RUNNING self.session.event_loop.add_event(t, self.run) @@ -1058,7 +1101,7 @@ class Ns2ScriptedMobility(WayPointMobility): :return: nothing """ - logger.info("starting script: %s", self.file) + logging.info("starting script: %s", self.file) laststate = 
self.state super().start() if laststate == self.STATE_PAUSED: @@ -1079,7 +1122,7 @@ class Ns2ScriptedMobility(WayPointMobility): :return: nothing """ - logger.info("pausing script: %s", self.file) + logging.info("pausing script: %s", self.file) super().pause() self.statescript("pause") @@ -1091,7 +1134,7 @@ class Ns2ScriptedMobility(WayPointMobility): position :return: nothing """ - logger.info("stopping script: %s", self.file) + logging.info("stopping script: %s", self.file) super().stop(move_initial=move_initial) self.statescript("stop") @@ -1111,7 +1154,8 @@ class Ns2ScriptedMobility(WayPointMobility): filename = self.script_stop if filename is None or filename == "": return - filename = Path(filename) filename = self.findfile(filename) args = f"{BASH} {filename} {typestr}" - utils.cmd(args, cwd=self.session.directory, env=self.session.get_environment()) + utils.cmd( + args, cwd=self.session.session_dir, env=self.session.get_environment() + ) diff --git a/daemon/core/nodes/base.py b/daemon/core/nodes/base.py index e59a89e4..9a301432 100644 --- a/daemon/core/nodes/base.py +++ b/daemon/core/nodes/base.py @@ -3,124 +3,32 @@ Defines the base logic for nodes used within core. """ import abc import logging -import shlex +import os import shutil import threading -from dataclasses import dataclass, field -from pathlib import Path from threading import RLock -from typing import TYPE_CHECKING, Optional, Union +from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Type, Union import netaddr from core import utils from core.configservice.dependencies import ConfigServiceDependencies -from core.emulator.data import InterfaceData, LinkOptions +from core.emulator.data import InterfaceData, LinkData, LinkOptions +from core.emulator.enumerations import LinkTypes, MessageFlags, NodeTypes from core.errors import CoreCommandError, CoreError -from core.executables import BASH, MOUNT, TEST, VCMD, VNODED -from core.nodes.interface import DEFAULT_MTU, CoreInterface +from core.executables import MOUNT, TEST, VNODED +from core.nodes.client import VnodeClient +from core.nodes.interface import CoreInterface, TunTap, Veth from core.nodes.netclient import LinuxNetClient, get_net_client -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.distributed import DistributedServer from core.emulator.session import Session from core.configservice.base import ConfigService from core.services.coreservices import CoreService - CoreServices = list[Union[CoreService, type[CoreService]]] - ConfigServiceType = type[ConfigService] - -PRIVATE_DIRS: list[Path] = [Path("/var/run"), Path("/var/log")] - - -@dataclass -class Position: - """ - Helper class for Cartesian coordinate position - """ - - x: float = 0.0 - y: float = 0.0 - z: float = 0.0 - lon: float = None - lat: float = None - alt: float = None - - def set(self, x: float = None, y: float = None, z: float = None) -> bool: - """ - Returns True if the position has actually changed. - - :param x: x position - :param y: y position - :param z: z position - :return: True if position changed, False otherwise - """ - if self.x == x and self.y == y and self.z == z: - return False - self.x = x - self.y = y - self.z = z - return True - - def get(self) -> tuple[float, float, float]: - """ - Retrieve x,y,z position. 
- - :return: x,y,z position tuple - """ - return self.x, self.y, self.z - - def has_geo(self) -> bool: - return all(x is not None for x in [self.lon, self.lat, self.alt]) - - def set_geo(self, lon: float, lat: float, alt: float) -> None: - """ - Set geo position lon, lat, alt. - - :param lon: longitude value - :param lat: latitude value - :param alt: altitude value - :return: nothing - """ - self.lon = lon - self.lat = lat - self.alt = alt - - def get_geo(self) -> tuple[float, float, float]: - """ - Retrieve current geo position lon, lat, alt. - - :return: lon, lat, alt position tuple - """ - return self.lon, self.lat, self.alt - - -@dataclass -class NodeOptions: - """ - Base options for configuring a node. - """ - - canvas: int = None - """id of canvas for display within gui""" - icon: str = None - """custom icon for display, None for default""" - - -@dataclass -class CoreNodeOptions(NodeOptions): - model: str = "PC" - """model is used for providing a default set of services""" - services: list[str] = field(default_factory=list) - """services to start within node""" - config_services: list[str] = field(default_factory=list) - """config services to start within node""" - directory: Path = None - """directory to define node, defaults to path under the session directory""" - legacy: bool = False - """legacy nodes default to standard services""" + CoreServices = List[Union[CoreService, Type[CoreService]]] + ConfigServiceType = Type[ConfigService] class NodeBase(abc.ABC): @@ -128,13 +36,14 @@ class NodeBase(abc.ABC): Base class for CORE nodes (nodes and networks) """ + apitype: Optional[NodeTypes] = None + def __init__( self, session: "Session", _id: int = None, name: str = None, server: "DistributedServer" = None, - options: NodeOptions = None, ) -> None: """ Creates a NodeBase instance. @@ -144,29 +53,27 @@ class NodeBase(abc.ABC): :param name: object name :param server: remote server node will run on, default is None for localhost - :param options: options to create node with """ + self.session: "Session" = session - self.id: int = _id if _id is not None else self.session.next_node_id() - self.name: str = name or f"{self.__class__.__name__}{self.id}" + if _id is None: + _id = session.next_node_id() + self.id: int = _id + if name is None: + name = f"o{self.id}" + self.name: str = name self.server: "DistributedServer" = server - self.model: Optional[str] = None + self.type: Optional[str] = None self.services: CoreServices = [] - self.ifaces: dict[int, CoreInterface] = {} + self.ifaces: Dict[int, CoreInterface] = {} self.iface_id: int = 0 + self.canvas: Optional[int] = None + self.icon: Optional[str] = None self.position: Position = Position() self.up: bool = False - self.lock: RLock = RLock() self.net_client: LinuxNetClient = get_net_client( self.session.use_ovs(), self.host_cmd ) - options = options if options else NodeOptions() - self.canvas: Optional[int] = options.canvas - self.icon: Optional[str] = options.icon - - @classmethod - def create_options(cls) -> NodeOptions: - return NodeOptions() @abc.abstractmethod def startup(self) -> None: @@ -186,23 +93,11 @@ class NodeBase(abc.ABC): """ raise NotImplementedError - @abc.abstractmethod - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - """ - Adopt an interface, placing within network namespacing for containers - and setting to bridge masters for network like nodes. 
- - :param iface: interface to adopt - :param name: proper name to use for interface - :return: nothing - """ - raise NotImplementedError - def host_cmd( self, args: str, - env: dict[str, str] = None, - cwd: Path = None, + env: Dict[str, str] = None, + cwd: str = None, wait: bool = True, shell: bool = False, ) -> str: @@ -222,19 +117,6 @@ class NodeBase(abc.ABC): else: return self.server.remote_cmd(args, env, cwd, wait) - def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: - """ - Runs a command that is in the context of a node, default is to run a standard - host command. - - :param args: command to run - :param wait: True to wait for status, False otherwise - :param shell: True to use shell, False otherwise - :return: combined stdout and stderr - :raises CoreCommandError: when a non-zero exit status occurs - """ - return self.host_cmd(args, wait=wait, shell=shell) - def setposition(self, x: float = None, y: float = None, z: float = None) -> bool: """ Set the (x,y,z) position of the object. @@ -246,7 +128,7 @@ class NodeBase(abc.ABC): """ return self.position.set(x=x, y=y, z=z) - def getposition(self) -> tuple[float, float, float]: + def getposition(self) -> Tuple[float, float, float]: """ Return an (x,y,z) tuple representing this object's position. @@ -254,71 +136,6 @@ class NodeBase(abc.ABC): """ return self.position.get() - def create_iface( - self, iface_data: InterfaceData = None, options: LinkOptions = None - ) -> CoreInterface: - """ - Creates an interface and adopts it to a node. - - :param iface_data: data to create interface with - :param options: options to create interface with - :return: created interface - """ - with self.lock: - if iface_data and iface_data.id is not None: - if iface_data.id in self.ifaces: - raise CoreError( - f"node({self.id}) interface({iface_data.id}) already exists" - ) - iface_id = iface_data.id - else: - iface_id = self.next_iface_id() - mtu = DEFAULT_MTU - if iface_data and iface_data.mtu is not None: - mtu = iface_data.mtu - unique_name = f"{self.id}.{iface_id}.{self.session.short_session_id()}" - name = f"veth{unique_name}" - localname = f"beth{unique_name}" - iface = CoreInterface( - iface_id, - name, - localname, - self.session.use_ovs(), - mtu, - self, - self.server, - ) - if iface_data: - if iface_data.mac: - iface.set_mac(iface_data.mac) - for ip in iface_data.get_ips(): - iface.add_ip(ip) - if iface_data.name: - name = iface_data.name - if options: - iface.options.update(options) - self.ifaces[iface_id] = iface - if self.up: - iface.startup() - self.adopt_iface(iface, name) - else: - iface.name = name - return iface - - def delete_iface(self, iface_id: int) -> CoreInterface: - """ - Delete an interface. - - :param iface_id: interface id to delete - :return: the removed interface - """ - if iface_id not in self.ifaces: - raise CoreError(f"node({self.name}) interface({iface_id}) does not exist") - iface = self.ifaces.pop(iface_id) - logger.info("node(%s) removing interface(%s)", self.name, iface.name) - iface.shutdown() - return iface - def get_iface(self, iface_id: int) -> CoreInterface: """ Retrieve interface based on id. @@ -331,7 +148,7 @@ class NodeBase(abc.ABC): raise CoreError(f"node({self.name}) does not have interface({iface_id})") return self.ifaces[iface_id] - def get_ifaces(self, control: bool = True) -> list[CoreInterface]: + def get_ifaces(self, control: bool = True) -> List[CoreInterface]: """ Retrieve sorted list of interfaces, optionally do not include control interfaces. 
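# host_cmd() above runs a command either on the local host or on the node's
# distributed server, depending on whether the node was created with a server.
# A simplified standalone sketch of that dispatch; run_remote() is a stand-in
# for the DistributedServer.remote_cmd call (which runs the command over ssh on
# the remote host), not a real CORE function.
import shlex
import subprocess
from typing import Optional

def run_local(args: str) -> str:
    # run on the local host, raising on a non-zero exit status (like utils.cmd)
    result = subprocess.run(
        shlex.split(args),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
        check=True,
    )
    return result.stdout

def run_remote(host: str, args: str) -> str:
    # placeholder: a real implementation would execute the command on the remote host
    raise NotImplementedError(f"would run {args!r} on {host}")

def host_cmd(args: str, server: Optional[str] = None) -> str:
    # dispatch locally when no server is assigned, otherwise to the remote server
    return run_local(args) if server is None else run_remote(server, args)

if __name__ == "__main__":
    print(host_cmd("uname -a"))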
@@ -342,7 +159,7 @@ class NodeBase(abc.ABC): ifaces = [] for iface_id in sorted(self.ifaces): iface = self.ifaces[iface_id] - if not control and iface.control: + if not control and getattr(iface, "control", False): continue ifaces.append(iface) return ifaces @@ -371,6 +188,15 @@ class NodeBase(abc.ABC): self.iface_id += 1 return iface_id + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: + """ + Build link data for this node. + + :param flags: message flags + :return: list of link data + """ + return [] + class CoreNodeBase(NodeBase): """ @@ -383,7 +209,6 @@ class CoreNodeBase(NodeBase): _id: int = None, name: str = None, server: "DistributedServer" = None, - options: NodeOptions = None, ) -> None: """ Create a CoreNodeBase instance. @@ -394,27 +219,25 @@ class CoreNodeBase(NodeBase): :param server: remote server node will run on, default is None for localhost """ - super().__init__(session, _id, name, server, options) - self.config_services: dict[str, "ConfigService"] = {} - self.directory: Optional[Path] = None + super().__init__(session, _id, name, server) + self.config_services: Dict[str, "ConfigService"] = {} + self.nodedir: Optional[str] = None self.tmpnodedir: bool = False @abc.abstractmethod - def create_dir(self, dir_path: Path) -> None: - """ - Create a node private directory. - - :param dir_path: path to create - :return: nothing - """ + def startup(self) -> None: raise NotImplementedError @abc.abstractmethod - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: + def shutdown(self) -> None: + raise NotImplementedError + + @abc.abstractmethod + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: """ Create a node file with a given mode. - :param file_path: name of file to create + :param filename: name of file to create :param contents: contents of file :param mode: mode for file :return: nothing @@ -422,15 +245,27 @@ class CoreNodeBase(NodeBase): raise NotImplementedError @abc.abstractmethod - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: + def addfile(self, srcname: str, filename: str) -> None: """ - Copy source file to node host destination, updating the file mode when - provided. + Add a file. - :param src_path: source file to copy - :param dst_path: node host destination - :param mode: file mode + :param srcname: source file name + :param filename: file name to add :return: nothing + :raises CoreCommandError: when a non-zero exit status occurs + """ + raise NotImplementedError + + @abc.abstractmethod + def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: + """ + Runs a command within a node container. + + :param args: command to run + :param wait: True to wait for status, False otherwise + :param shell: True to use shell, False otherwise + :return: combined stdout and stderr + :raises CoreCommandError: when a non-zero exit status occurs """ raise NotImplementedError @@ -444,6 +279,19 @@ class CoreNodeBase(NodeBase): """ raise NotImplementedError + @abc.abstractmethod + def new_iface( + self, net: "CoreNetworkBase", iface_data: InterfaceData + ) -> CoreInterface: + """ + Create a new interface. 
+ + :param net: network to associate with + :param iface_data: interface data for new interface + :return: interface index + """ + raise NotImplementedError + @abc.abstractmethod def path_exists(self, path: str) -> bool: """ @@ -454,21 +302,6 @@ class CoreNodeBase(NodeBase): """ raise NotImplementedError - def host_path(self, path: Path, is_dir: bool = False) -> Path: - """ - Return the name of a node's file on the host filesystem. - - :param path: path to translate to host path - :param is_dir: True if path is a directory path, False otherwise - :return: path to file - """ - if is_dir: - directory = str(path).strip("/").replace("/", ".") - return self.directory / directory - else: - directory = str(path.parent).strip("/").replace("/", ".") - return self.directory / directory / path.name - def add_config_service(self, service_class: "ConfigServiceType") -> None: """ Adds a configuration service to the node. @@ -481,7 +314,7 @@ class CoreNodeBase(NodeBase): raise CoreError(f"node({self.name}) already has service({name})") self.config_services[name] = service_class(self) - def set_service_config(self, name: str, data: dict[str, str]) -> None: + def set_service_config(self, name: str, data: Dict[str, str]) -> None: """ Sets configuration service custom config data. @@ -506,24 +339,15 @@ class CoreNodeBase(NodeBase): for service in startup_path: service.start() - def stop_config_services(self) -> None: - """ - Stop all configuration services. - - :return: nothing - """ - for service in self.config_services.values(): - service.stop() - def makenodedir(self) -> None: """ Create the node directory. :return: nothing """ - if self.directory is None: - self.directory = self.session.directory / f"{self.name}.conf" - self.host_cmd(f"mkdir -p {self.directory}") + if self.nodedir is None: + self.nodedir = os.path.join(self.session.session_dir, self.name + ".conf") + self.host_cmd(f"mkdir -p {self.nodedir}") self.tmpnodedir = True else: self.tmpnodedir = False @@ -534,11 +358,59 @@ class CoreNodeBase(NodeBase): :return: nothing """ - preserve = self.session.options.get_int("preservedir") == 1 + preserve = self.session.options.get_config("preservedir") == "1" if preserve: return if self.tmpnodedir: - self.host_cmd(f"rm -rf {self.directory}") + self.host_cmd(f"rm -rf {self.nodedir}") + + def add_iface(self, iface: CoreInterface, iface_id: int) -> None: + """ + Add network interface to node and set the network interface index if successful. + + :param iface: network interface to add + :param iface_id: interface id + :return: nothing + """ + if iface_id in self.ifaces: + raise CoreError(f"interface({iface_id}) already exists") + self.ifaces[iface_id] = iface + iface.node_id = iface_id + + def delete_iface(self, iface_id: int) -> None: + """ + Delete a network interface + + :param iface_id: interface index to delete + :return: nothing + """ + if iface_id not in self.ifaces: + raise CoreError(f"node({self.name}) interface({iface_id}) does not exist") + iface = self.ifaces.pop(iface_id) + logging.info("node(%s) removing interface(%s)", self.name, iface.name) + iface.detachnet() + iface.shutdown() + + def attachnet(self, iface_id: int, net: "CoreNetworkBase") -> None: + """ + Attach a network. + + :param iface_id: interface of index to attach + :param net: network to attach + :return: nothing + """ + iface = self.get_iface(iface_id) + iface.attachnet(net) + + def detachnet(self, iface_id: int) -> None: + """ + Detach network interface. 
+ + :param iface_id: interface id to detach + :return: nothing + """ + iface = self.get_iface(iface_id) + iface.detachnet() def setposition(self, x: float = None, y: float = None, z: float = None) -> None: """ @@ -554,19 +426,40 @@ class CoreNodeBase(NodeBase): for iface in self.get_ifaces(): iface.setposition() + def commonnets( + self, node: "CoreNodeBase", want_ctrl: bool = False + ) -> List[Tuple["CoreNetworkBase", CoreInterface, CoreInterface]]: + """ + Given another node or net object, return common networks between + this node and that object. A list of tuples is returned, with each tuple + consisting of (network, interface1, interface2). + + :param node: node to get common network with + :param want_ctrl: flag set to determine if control network are wanted + :return: tuples of common networks + """ + common = [] + for iface1 in self.get_ifaces(control=want_ctrl): + for iface2 in node.get_ifaces(): + if iface1.net == iface2.net: + common.append((iface1.net, iface1, iface2)) + return common + class CoreNode(CoreNodeBase): """ Provides standard core node logic. """ + apitype: NodeTypes = NodeTypes.DEFAULT + def __init__( self, session: "Session", _id: int = None, name: str = None, + nodedir: str = None, server: "DistributedServer" = None, - options: CoreNodeOptions = None, ) -> None: """ Create a CoreNode instance. @@ -574,37 +467,22 @@ class CoreNode(CoreNodeBase): :param session: core session instance :param _id: object id :param name: object name + :param nodedir: node directory :param server: remote server node will run on, default is None for localhost - :param options: options to create node with """ - options = options or CoreNodeOptions() - super().__init__(session, _id, name, server, options) - self.directory: Optional[Path] = options.directory - self.ctrlchnlname: Path = self.session.directory / self.name + super().__init__(session, _id, name, server) + self.nodedir: Optional[str] = nodedir + self.ctrlchnlname: str = os.path.abspath( + os.path.join(self.session.session_dir, self.name) + ) + self.client: Optional[VnodeClient] = None self.pid: Optional[int] = None - self._mounts: list[tuple[Path, Path]] = [] + self.lock: RLock = RLock() + self._mounts: List[Tuple[str, str]] = [] self.node_net_client: LinuxNetClient = self.create_node_net_client( self.session.use_ovs() ) - options = options or CoreNodeOptions() - self.model: Optional[str] = options.model - # setup services - if options.legacy or options.services: - logger.debug("set node type: %s", self.model) - self.session.services.add_services(self, self.model, options.services) - # add config services - config_services = options.config_services - if not options.legacy and not config_services and not options.services: - config_services = self.session.services.default_services.get(self.model, []) - logger.info("setting node config services: %s", config_services) - for name in config_services: - service_class = self.session.service_manager.get_service(name) - self.add_config_service(service_class) - - @classmethod - def create_options(cls) -> CoreNodeOptions: - return CoreNodeOptions() def create_node_net_client(self, use_ovs: bool) -> LinuxNetClient: """ @@ -640,30 +518,39 @@ class CoreNode(CoreNodeBase): self.makenodedir() if self.up: raise ValueError("starting a node that is already up") + # create a new namespace for this node using vnoded vnoded = ( f"{VNODED} -v -c {self.ctrlchnlname} -l {self.ctrlchnlname}.log " f"-p {self.ctrlchnlname}.pid" ) - if self.directory: - vnoded += f" -C {self.directory}" + if self.nodedir: + 
vnoded += f" -C {self.nodedir}" env = self.session.get_environment(state=False) env["NODE_NUMBER"] = str(self.id) env["NODE_NAME"] = str(self.name) + output = self.host_cmd(vnoded, env=env) self.pid = int(output) - logger.debug("node(%s) pid: %s", self.name, self.pid) + logging.debug("node(%s) pid: %s", self.name, self.pid) + + # create vnode client + self.client = VnodeClient(self.name, self.ctrlchnlname) + # bring up the loopback interface - logger.debug("bringing up loopback interface") + logging.debug("bringing up loopback interface") self.node_net_client.device_up("lo") + # set hostname for node - logger.debug("setting hostname: %s", self.name) + logging.debug("setting hostname: %s", self.name) self.node_net_client.set_hostname(self.name) + # mark node as up self.up = True + # create private directories - for dir_path in PRIVATE_DIRS: - self.create_dir(dir_path) + self.privatedir("/var/run") + self.privatedir("/var/log") def shutdown(self) -> None: """ @@ -674,48 +561,38 @@ class CoreNode(CoreNodeBase): # nothing to do if node is not up if not self.up: return + with self.lock: try: # unmount all targets (NOTE: non-persistent mount namespaces are # removed by the kernel when last referencing process is killed) self._mounts = [] + # shutdown all interfaces for iface in self.get_ifaces(): - try: - self.node_net_client.device_flush(iface.name) - except CoreCommandError: - pass iface.shutdown() + # kill node process if present try: self.host_cmd(f"kill -9 {self.pid}") except CoreCommandError: - logger.exception("error killing process") + logging.exception("error killing process") + # remove node directory if present try: self.host_cmd(f"rm -rf {self.ctrlchnlname}") except CoreCommandError: - logger.exception("error removing node directory") + logging.exception("error removing node directory") + # clear interface data, close client, and mark self and not up self.ifaces.clear() + self.client.close() self.up = False except OSError: - logger.exception("error during shutdown") + logging.exception("error during shutdown") finally: self.rmnodedir() - def create_cmd(self, args: str, shell: bool = False) -> str: - """ - Create command used to run commands within the context of a node. - - :param args: command arguments - :param shell: True to run shell like, False otherwise - :return: node command - """ - if shell: - args = f"{BASH} -c {shlex.quote(args)}" - return f"{VCMD} -c {self.ctrlchnlname} -- {args}" - def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: """ Runs a command that is used to configure and setup the network within a @@ -727,10 +604,10 @@ class CoreNode(CoreNodeBase): :return: combined stdout and stderr :raises CoreCommandError: when a non-zero exit status occurs """ - args = self.create_cmd(args, shell) if self.server is None: - return utils.cmd(args, wait=wait, shell=shell) + return self.client.check_cmd(args, wait=wait, shell=shell) else: + args = self.client.create_cmd(args, shell) return self.server.remote_cmd(args, wait=wait) def path_exists(self, path: str) -> bool: @@ -753,161 +630,307 @@ class CoreNode(CoreNodeBase): :param sh: shell to execute command in :return: str """ - terminal = self.create_cmd(sh) + terminal = self.client.create_cmd(sh) if self.server is None: return terminal else: return f"ssh -X -f {self.server.host} xterm -e {terminal}" - def create_dir(self, dir_path: Path) -> None: + def privatedir(self, path: str) -> None: """ - Create a node private directory. + Create a private directory. 
- :param dir_path: path to create + :param path: path to create :return: nothing """ - if not dir_path.is_absolute(): - raise CoreError(f"private directory path not fully qualified: {dir_path}") - logger.debug("node(%s) creating private directory: %s", self.name, dir_path) - parent_path = self._find_parent_path(dir_path) - if parent_path: - self.host_cmd(f"mkdir -p {parent_path}") - else: - host_path = self.host_path(dir_path, is_dir=True) - self.host_cmd(f"mkdir -p {host_path}") - self.mount(host_path, dir_path) + if path[0] != "/": + raise ValueError(f"path not fully qualified: {path}") + hostpath = os.path.join( + self.nodedir, os.path.normpath(path).strip("/").replace("/", ".") + ) + self.host_cmd(f"mkdir -p {hostpath}") + self.mount(hostpath, path) - def mount(self, src_path: Path, target_path: Path) -> None: + def mount(self, source: str, target: str) -> None: """ Create and mount a directory. - :param src_path: source directory to mount - :param target_path: target directory to create + :param source: source directory to mount + :param target: target directory to create :return: nothing :raises CoreCommandError: when a non-zero exit status occurs """ - logger.debug("node(%s) mounting: %s at %s", self.name, src_path, target_path) - self.cmd(f"mkdir -p {target_path}") - self.cmd(f"{MOUNT} -n --bind {src_path} {target_path}") - self._mounts.append((src_path, target_path)) + source = os.path.abspath(source) + logging.debug("node(%s) mounting: %s at %s", self.name, source, target) + self.cmd(f"mkdir -p {target}") + self.cmd(f"{MOUNT} -n --bind {source} {target}") + self._mounts.append((source, target)) - def _find_parent_path(self, path: Path) -> Optional[Path]: + def next_iface_id(self) -> int: """ - Check if there is a mounted parent directory created for this node. + Retrieve a new interface index. - :param path: existing parent path to use - :return: exist parent path if exists, None otherwise + :return: new interface index """ - logger.debug("looking for existing parent: %s", path) - existing_path = None - for parent in path.parents: - node_path = self.host_path(parent, is_dir=True) - if node_path == self.directory: - break - if self.path_exists(str(node_path)): - relative_path = path.relative_to(parent) - existing_path = node_path / relative_path - break - return existing_path + with self.lock: + return super().next_iface_id() - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: + def newveth(self, iface_id: int = None, ifname: str = None) -> int: """ - Create file within a node at the given path, using contents and mode. + Create a new interface. 
- :param file_path: desired path for file - :param contents: contents of file - :param mode: mode to create file with + :param iface_id: id for the new interface + :param ifname: name for the new interface :return: nothing """ - logger.debug("node(%s) create file(%s) mode(%o)", self.name, file_path, mode) - host_path = self._find_parent_path(file_path) - if host_path: - self.host_cmd(f"mkdir -p {host_path.parent}") - else: - host_path = self.host_path(file_path) - directory = host_path.parent - if self.server is None: - if not directory.exists(): - directory.mkdir(parents=True, mode=0o755) - with host_path.open("w") as f: - f.write(contents) - host_path.chmod(mode) - else: - self.host_cmd(f"mkdir -m {0o755:o} -p {directory}") - self.server.remote_put_temp(host_path, contents) - self.host_cmd(f"chmod {mode:o} {host_path}") + with self.lock: + if iface_id is None: + iface_id = self.next_iface_id() - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: + if ifname is None: + ifname = f"eth{iface_id}" + + sessionid = self.session.short_session_id() + + try: + suffix = f"{self.id:x}.{iface_id}.{sessionid}" + except TypeError: + suffix = f"{self.id}.{iface_id}.{sessionid}" + + localname = f"veth{suffix}" + if len(localname) >= 16: + raise ValueError(f"interface local name ({localname}) too long") + + name = localname + "p" + if len(name) >= 16: + raise ValueError(f"interface name ({name}) too long") + + veth = Veth( + self.session, self, name, localname, start=self.up, server=self.server + ) + + if self.up: + self.net_client.device_ns(veth.name, str(self.pid)) + self.node_net_client.device_name(veth.name, ifname) + self.node_net_client.checksums_off(ifname) + + veth.name = ifname + + if self.up: + flow_id = self.node_net_client.get_ifindex(veth.name) + veth.flow_id = int(flow_id) + logging.debug("interface flow index: %s - %s", veth.name, veth.flow_id) + mac = self.node_net_client.get_mac(veth.name) + logging.debug("interface mac: %s - %s", veth.name, mac) + veth.set_mac(mac) + + try: + # add network interface to the node. If unsuccessful, destroy the + # network interface and raise exception. + self.add_iface(veth, iface_id) + except ValueError as e: + veth.shutdown() + del veth + raise e + + return iface_id + + def newtuntap(self, iface_id: int = None, ifname: str = None) -> int: """ - Copy source file to node host destination, updating the file mode when - provided. + Create a new tunnel tap. - :param src_path: source file to copy - :param dst_path: node host destination - :param mode: file mode + :param iface_id: interface id + :param ifname: interface name + :return: interface index + """ + with self.lock: + if iface_id is None: + iface_id = self.next_iface_id() + + if ifname is None: + ifname = f"eth{iface_id}" + + sessionid = self.session.short_session_id() + localname = f"tap{self.id}.{iface_id}.{sessionid}" + name = ifname + tuntap = TunTap(self.session, self, name, localname, start=self.up) + + try: + self.add_iface(tuntap, iface_id) + except ValueError as e: + tuntap.shutdown() + del tuntap + raise e + + return iface_id + + def set_mac(self, iface_id: int, mac: str) -> None: + """ + Set hardware address for an interface. 
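# newveth() above builds each container interface as a veth pair: the peer end
# is moved into the node's network namespace (identified by the vnoded pid) and
# renamed to ethN. A rough shell-level sketch of the same steps using iproute2
# and nsenter, with made-up interface names and pid; CORE performs these steps
# through its net client classes rather than by shelling out like this.
import subprocess

def create_veth_pair(localname: str, peername: str, pid: int, ifname: str) -> None:
    run = lambda cmd: subprocess.run(cmd.split(), check=True)
    run(f"ip link add {localname} type veth peer name {peername}")
    run(f"ip link set {peername} netns {pid}")                        # move peer into node namespace
    run(f"nsenter -t {pid} -n ip link set {peername} name {ifname}")  # rename inside the node
    run(f"nsenter -t {pid} -n ip link set {ifname} up")

# create_veth_pair("veth1.0.abcd", "veth1.0.abcd.p", 12345, "eth0")  # requires root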
+ + :param iface_id: id of interface to set hardware address for + :param mac: mac address to set :return: nothing + :raises CoreCommandError: when a non-zero exit status occurs """ - logger.debug( - "node(%s) copying file src(%s) to dst(%s) mode(%o)", - self.name, - src_path, - dst_path, - mode or 0, - ) - host_path = self._find_parent_path(dst_path) - if host_path: - self.host_cmd(f"mkdir -p {host_path.parent}") - else: - host_path = self.host_path(dst_path) - if self.server is None: - shutil.copy2(src_path, host_path) - else: - self.server.remote_put(src_path, host_path) - if mode is not None: - self.host_cmd(f"chmod {mode:o} {host_path}") - - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - """ - Adopt interface to the network namespace of the node and setting - the proper name provided. - - :param iface: interface to adopt - :param name: proper name for interface - :return: nothing - """ - # TODO: container, checksums off (container only?) - # TODO: container, get flow id (container only?) - # validate iface belongs to node and get id - iface_id = self.get_iface_id(iface) - if iface_id == -1: - raise CoreError(f"adopting unknown iface({iface.name})") - # add iface to container namespace - self.net_client.device_ns(iface.name, str(self.pid)) - # use default iface name for container, if a unique name was not provided - if iface.name == name: - name = f"eth{iface_id}" - self.node_net_client.device_name(iface.name, name) - iface.name = name - # turn checksums off - self.node_net_client.checksums_off(iface.name) - # retrieve flow id for container - iface.flow_id = self.node_net_client.get_ifindex(iface.name) - logger.debug("interface flow index: %s - %s", iface.name, iface.flow_id) - # set mac address - if iface.mac: + iface = self.get_iface(iface_id) + iface.set_mac(mac) + if self.up: self.node_net_client.device_mac(iface.name, str(iface.mac)) - logger.debug("interface mac: %s - %s", iface.name, iface.mac) - # set all addresses - for ip in iface.ips(): + + def add_ip(self, iface_id: int, ip: str) -> None: + """ + Add an ip address to an interface in the format "10.0.0.1/24". + + :param iface_id: id of interface to add address to + :param ip: address to add to interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + iface = self.get_iface(iface_id) + iface.add_ip(ip) + if self.up: # ipv4 check broadcast = None - if netaddr.valid_ipv4(str(ip.ip)): + if netaddr.valid_ipv4(ip): broadcast = "+" - self.node_net_client.create_address(iface.name, str(ip), broadcast) - # configure iface options - iface.set_config() - # set iface up - self.node_net_client.device_up(iface.name) + self.node_net_client.create_address(iface.name, ip, broadcast) + + def remove_ip(self, iface_id: int, ip: str) -> None: + """ + Remove an ip address from an interface in the format "10.0.0.1/24". + + :param iface_id: id of interface to delete address from + :param ip: ip address to remove from interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + iface = self.get_iface(iface_id) + iface.remove_ip(ip) + if self.up: + self.node_net_client.delete_address(iface.name, ip) + + def ifup(self, iface_id: int) -> None: + """ + Bring an interface up. 
+ + :param iface_id: index of interface to bring up + :return: nothing + """ + if self.up: + iface = self.get_iface(iface_id) + self.node_net_client.device_up(iface.name) + + def new_iface( + self, net: "CoreNetworkBase", iface_data: InterfaceData + ) -> CoreInterface: + """ + Create a new network interface. + + :param net: network to associate with + :param iface_data: interface data for new interface + :return: interface index + """ + with self.lock: + if net.has_custom_iface: + return net.custom_iface(self, iface_data) + else: + iface_id = iface_data.id + if iface_id is not None and iface_id in self.ifaces: + raise CoreError( + f"node({self.name}) already has interface({iface_id})" + ) + iface_id = self.newveth(iface_id, iface_data.name) + self.attachnet(iface_id, net) + if iface_data.mac: + self.set_mac(iface_id, iface_data.mac) + for ip in iface_data.get_ips(): + self.add_ip(iface_id, ip) + self.ifup(iface_id) + return self.get_iface(iface_id) + + def addfile(self, srcname: str, filename: str) -> None: + """ + Add a file. + + :param srcname: source file name + :param filename: file name to add + :return: nothing + :raises CoreCommandError: when a non-zero exit status occurs + """ + logging.info("adding file from %s to %s", srcname, filename) + directory = os.path.dirname(filename) + if self.server is None: + self.client.check_cmd(f"mkdir -p {directory}") + self.client.check_cmd(f"mv {srcname} {filename}") + self.client.check_cmd("sync") + else: + self.host_cmd(f"mkdir -p {directory}") + self.server.remote_put(srcname, filename) + + def hostfilename(self, filename: str) -> str: + """ + Return the name of a node"s file on the host filesystem. + + :param filename: host file name + :return: path to file + """ + dirname, basename = os.path.split(filename) + if not basename: + raise ValueError(f"no basename for filename: {filename}") + if dirname and dirname[0] == "/": + dirname = dirname[1:] + dirname = dirname.replace("/", ".") + dirname = os.path.join(self.nodedir, dirname) + return os.path.join(dirname, basename) + + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: + """ + Create a node file with a given mode. + + :param filename: name of file to create + :param contents: contents of file + :param mode: mode for file + :return: nothing + """ + hostfilename = self.hostfilename(filename) + dirname, _basename = os.path.split(hostfilename) + if self.server is None: + if not os.path.isdir(dirname): + os.makedirs(dirname, mode=0o755) + with open(hostfilename, "w") as open_file: + open_file.write(contents) + os.chmod(open_file.name, mode) + else: + self.host_cmd(f"mkdir -m {0o755:o} -p {dirname}") + self.server.remote_put_temp(hostfilename, contents) + self.host_cmd(f"chmod {mode:o} {hostfilename}") + logging.debug( + "node(%s) added file: %s; mode: 0%o", self.name, hostfilename, mode + ) + + def nodefilecopy(self, filename: str, srcfilename: str, mode: int = None) -> None: + """ + Copy a file to a node, following symlinks and preserving metadata. + Change file mode if specified. 
+ + :param filename: file name to copy file to + :param srcfilename: file to copy + :param mode: mode to copy to + :return: nothing + """ + hostfilename = self.hostfilename(filename) + if self.server is None: + shutil.copy2(srcfilename, hostfilename) + else: + self.server.remote_put(srcfilename, hostfilename) + if mode is not None: + self.host_cmd(f"chmod {mode:o} {hostfilename}") + logging.info( + "node(%s) copied file: %s; mode: %s", self.name, hostfilename, mode + ) class CoreNetworkBase(NodeBase): @@ -915,30 +938,86 @@ class CoreNetworkBase(NodeBase): Base class for networks """ + linktype: LinkTypes = LinkTypes.WIRED + has_custom_iface: bool = False + def __init__( self, session: "Session", _id: int, name: str, server: "DistributedServer" = None, - options: NodeOptions = None, ) -> None: """ Create a CoreNetworkBase instance. - :param session: session object + :param session: CORE session object :param _id: object id :param name: object name :param server: remote server node will run on, default is None for localhost - :param options: options to create node with """ - super().__init__(session, _id, name, server, options) - mtu = self.session.options.get_int("mtu") - self.mtu: int = mtu if mtu > 0 else DEFAULT_MTU - self.brname: Optional[str] = None - self.linked: dict[CoreInterface, dict[CoreInterface, bool]] = {} - self.linked_lock: threading.Lock = threading.Lock() + super().__init__(session, _id, name, server) + self.brname = None + self._linked = {} + self._linked_lock = threading.Lock() + + @abc.abstractmethod + def startup(self) -> None: + """ + Each object implements its own startup method. + + :return: nothing + """ + raise NotImplementedError + + @abc.abstractmethod + def shutdown(self) -> None: + """ + Each object implements its own shutdown method. + + :return: nothing + """ + raise NotImplementedError + + @abc.abstractmethod + def linknet(self, net: "CoreNetworkBase") -> CoreInterface: + """ + Link network to another. + + :param net: network to link with + :return: created interface + """ + raise NotImplementedError + + @abc.abstractmethod + def linkconfig( + self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None + ) -> None: + """ + Configure link parameters by applying tc queuing disciplines on the interface. + + :param iface: interface one + :param options: options for configuring link + :param iface2: interface two + :return: nothing + """ + raise NotImplementedError + + def custom_iface(self, node: CoreNode, iface_data: InterfaceData) -> CoreInterface: + raise NotImplementedError + + def get_linked_iface(self, net: "CoreNetworkBase") -> Optional[CoreInterface]: + """ + Return the interface that links this net with another net. 
+ + :param net: interface to get link for + :return: interface the provided network is linked to + """ + for iface in self.get_ifaces(): + if iface.othernet == net: + return iface + return None def attach(self, iface: CoreInterface) -> None: """ @@ -947,12 +1026,11 @@ class CoreNetworkBase(NodeBase): :param iface: network interface to attach :return: nothing """ - iface_id = self.next_iface_id() - self.ifaces[iface_id] = iface - iface.net = self - iface.net_id = iface_id - with self.linked_lock: - self.linked[iface] = {} + i = self.next_iface_id() + self.ifaces[i] = iface + iface.net_id = i + with self._linked_lock: + self._linked[iface] = {} def detach(self, iface: CoreInterface) -> None: """ @@ -962,7 +1040,143 @@ class CoreNetworkBase(NodeBase): :return: nothing """ del self.ifaces[iface.net_id] - iface.net = None iface.net_id = None - with self.linked_lock: - del self.linked[iface] + with self._linked_lock: + del self._linked[iface] + + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: + """ + Build link data objects for this network. Each link object describes a link + between this network and a node. + + :param flags: message type + :return: list of link data + """ + all_links = [] + + # build a link message from this network node to each node having a + # connected interface + for iface in self.get_ifaces(): + uni = False + linked_node = iface.node + if linked_node is None: + # two layer-2 switches/hubs linked together via linknet() + if not iface.othernet: + continue + linked_node = iface.othernet + if linked_node.id == self.id: + continue + iface.swapparams("_params_up") + upstream_params = iface.getparams() + iface.swapparams("_params_up") + if iface.getparams() != upstream_params: + uni = True + + unidirectional = 0 + if uni: + unidirectional = 1 + + mac = str(iface.mac) if iface.mac else None + iface2_data = InterfaceData( + id=linked_node.get_iface_id(iface), name=iface.name, mac=mac + ) + ip4 = iface.get_ip4() + if ip4: + iface2_data.ip4 = str(ip4.ip) + iface2_data.ip4_mask = ip4.prefixlen + ip6 = iface.get_ip6() + if ip6: + iface2_data.ip6 = str(ip6.ip) + iface2_data.ip6_mask = ip6.prefixlen + + options_data = iface.get_link_options(unidirectional) + link_data = LinkData( + message_type=flags, + type=self.linktype, + node1_id=self.id, + node2_id=linked_node.id, + iface2=iface2_data, + options=options_data, + ) + all_links.append(link_data) + + if not uni: + continue + iface.swapparams("_params_up") + options_data = iface.get_link_options(unidirectional) + link_data = LinkData( + message_type=MessageFlags.NONE, + type=self.linktype, + node1_id=linked_node.id, + node2_id=self.id, + options=options_data, + ) + iface.swapparams("_params_up") + all_links.append(link_data) + return all_links + + +class Position: + """ + Helper class for Cartesian coordinate position + """ + + def __init__(self, x: float = None, y: float = None, z: float = None) -> None: + """ + Creates a Position instance. + + :param x: x position + :param y: y position + :param z: z position + """ + self.x: float = x + self.y: float = y + self.z: float = z + self.lon: Optional[float] = None + self.lat: Optional[float] = None + self.alt: Optional[float] = None + + def set(self, x: float = None, y: float = None, z: float = None) -> bool: + """ + Returns True if the position has actually changed. 
+ + :param x: x position + :param y: y position + :param z: z position + :return: True if position changed, False otherwise + """ + if self.x == x and self.y == y and self.z == z: + return False + self.x = x + self.y = y + self.z = z + return True + + def get(self) -> Tuple[float, float, float]: + """ + Retrieve x,y,z position. + + :return: x,y,z position tuple + """ + return self.x, self.y, self.z + + def set_geo(self, lon: float, lat: float, alt: float) -> None: + """ + Set geo position lon, lat, alt. + + :param lon: longitude value + :param lat: latitude value + :param alt: altitude value + :return: nothing + """ + self.lon = lon + self.lat = lat + self.alt = alt + + def get_geo(self) -> Tuple[float, float, float]: + """ + Retrieve current geo position lon, lat, alt. + + :return: lon, lat, alt position tuple + """ + return self.lon, self.lat, self.alt diff --git a/daemon/core/nodes/client.py b/daemon/core/nodes/client.py new file mode 100644 index 00000000..710724b1 --- /dev/null +++ b/daemon/core/nodes/client.py @@ -0,0 +1,69 @@ +""" +client.py: implementation of the VnodeClient class for issuing commands +over a control channel to the vnoded process running in a network namespace. +The control channel can be accessed via calls using the vcmd shell. +""" + +from core import utils +from core.executables import BASH, VCMD + + +class VnodeClient: + """ + Provides client functionality for interacting with a virtual node. + """ + + def __init__(self, name: str, ctrlchnlname: str) -> None: + """ + Create a VnodeClient instance. + + :param name: name for client + :param ctrlchnlname: control channel name + """ + self.name: str = name + self.ctrlchnlname: str = ctrlchnlname + + def _verify_connection(self) -> None: + """ + Checks that the vcmd client is properly connected. + + :return: nothing + :raises IOError: when not connected + """ + if not self.connected(): + raise IOError("vcmd not connected") + + def connected(self) -> bool: + """ + Check if node is connected or not. + + :return: True if connected, False otherwise + """ + return True + + def close(self) -> None: + """ + Close the client connection. + + :return: nothing + """ + pass + + def create_cmd(self, args: str, shell: bool = False) -> str: + if shell: + args = f'{BASH} -c "{args}"' + return f"{VCMD} -c {self.ctrlchnlname} -- {args}" + + def check_cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: + """ + Run command and return exit status and combined stdout and stderr. 
+ + :param args: command to run + :param wait: True to wait for command status, False otherwise + :param shell: True to use shell, False otherwise + :return: combined stdout and stderr + :raises core.CoreCommandError: when there is a non-zero exit status + """ + self._verify_connection() + args = self.create_cmd(args, shell) + return utils.cmd(args, wait=wait, shell=shell) diff --git a/daemon/core/nodes/docker.py b/daemon/core/nodes/docker.py index ad05c407..ce34bd98 100644 --- a/daemon/core/nodes/docker.py +++ b/daemon/core/nodes/docker.py @@ -1,133 +1,110 @@ import json import logging -import shlex -from dataclasses import dataclass, field -from pathlib import Path +import os from tempfile import NamedTemporaryFile -from typing import TYPE_CHECKING +from typing import TYPE_CHECKING, Callable, Dict, Optional from core import utils from core.emulator.distributed import DistributedServer -from core.errors import CoreCommandError, CoreError -from core.executables import BASH -from core.nodes.base import CoreNode, CoreNodeOptions - -logger = logging.getLogger(__name__) +from core.emulator.enumerations import NodeTypes +from core.errors import CoreCommandError +from core.nodes.base import CoreNode +from core.nodes.netclient import LinuxNetClient, get_net_client if TYPE_CHECKING: from core.emulator.session import Session -DOCKER: str = "docker" +class DockerClient: + def __init__(self, name: str, image: str, run: Callable[..., str]) -> None: + self.name: str = name + self.image: str = image + self.run: Callable[..., str] = run + self.pid: Optional[str] = None -@dataclass -class DockerOptions(CoreNodeOptions): - image: str = "ubuntu" - """image used when creating container""" - binds: list[tuple[str, str]] = field(default_factory=list) - """bind mount source and destinations to setup within container""" - volumes: list[tuple[str, str, bool, bool]] = field(default_factory=list) - """ - volume mount source, destination, unique, delete to setup within container + def create_container(self) -> str: + self.run( + f"docker run -td --init --net=none --hostname {self.name} " + f"--name {self.name} --sysctl net.ipv6.conf.all.disable_ipv6=0 " + f"--privileged {self.image} /bin/bash" + ) + self.pid = self.get_pid() + return self.pid - unique is True for node unique volume naming - delete is True for deleting volume mount during shutdown - """ + def get_info(self) -> Dict: + args = f"docker inspect {self.name}" + output = self.run(args) + data = json.loads(output) + if not data: + raise CoreCommandError(1, args, f"docker({self.name}) not present") + return data[0] + def is_alive(self) -> bool: + try: + data = self.get_info() + return data["State"]["Running"] + except CoreCommandError: + return False -@dataclass -class DockerVolume: - src: str - """volume mount name""" - dst: str - """volume mount destination directory""" - unique: bool = True - """True to create a node unique prefixed name for this volume""" - delete: bool = True - """True to delete the volume during shutdown""" - path: str = None - """path to the volume on the host""" + def stop_container(self) -> None: + self.run(f"docker rm -f {self.name}") + + def check_cmd(self, cmd: str, wait: bool = True, shell: bool = False) -> str: + logging.info("docker cmd output: %s", cmd) + return utils.cmd(f"docker exec {self.name} {cmd}", wait=wait, shell=shell) + + def create_ns_cmd(self, cmd: str) -> str: + return f"nsenter -t {self.pid} -a {cmd}" + + def get_pid(self) -> str: + args = f"docker inspect -f '{{{{.State.Pid}}}}' {self.name}" + output = 
self.run(args) + self.pid = output + logging.debug("node(%s) pid: %s", self.name, self.pid) + return output + + def copy_file(self, source: str, destination: str) -> str: + args = f"docker cp {source} {self.name}:{destination}" + return self.run(args) class DockerNode(CoreNode): - """ - Provides logic for creating a Docker based node. - """ + apitype = NodeTypes.DOCKER def __init__( self, session: "Session", _id: int = None, name: str = None, + nodedir: str = None, server: DistributedServer = None, - options: DockerOptions = None, + image: str = None, ) -> None: """ Create a DockerNode instance. :param session: core session instance - :param _id: node id - :param name: node name + :param _id: object id + :param name: object name + :param nodedir: node directory :param server: remote server node will run on, default is None for localhost - :param options: options for creating node + :param image: image to start container with """ - options = options or DockerOptions() - super().__init__(session, _id, name, server, options) - self.image: str = options.image - self.binds: list[tuple[str, str]] = options.binds - self.volumes: dict[str, DockerVolume] = {} - self.env: dict[str, str] = {} - for src, dst, unique, delete in options.volumes: - src_name = self._unique_name(src) if unique else src - self.volumes[src] = DockerVolume(src_name, dst, unique, delete) + if image is None: + image = "ubuntu" + self.image: str = image + super().__init__(session, _id, name, nodedir, server) - @classmethod - def create_options(cls) -> DockerOptions: + def create_node_net_client(self, use_ovs: bool) -> LinuxNetClient: """ - Return default creation options, which can be used during node creation. + Create node network client for running network commands within the nodes + container. - :return: docker options + :param use_ovs: True for OVS bridges, False for Linux bridges + :return:node network client """ - return DockerOptions() - - def create_cmd(self, args: str, shell: bool = False) -> str: - """ - Create command used to run commands within the context of a node. - - :param args: command arguments - :param shell: True to run shell like, False otherwise - :return: node command - """ - if shell: - args = f"{BASH} -c {shlex.quote(args)}" - return f"nsenter -t {self.pid} -m -u -i -p -n -- {args}" - - def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: - """ - Runs a command that is used to configure and setup the network within a - node. - - :param args: command to run - :param wait: True to wait for status, False otherwise - :param shell: True to use shell, False otherwise - :return: combined stdout and stderr - :raises CoreCommandError: when a non-zero exit status occurs - """ - args = self.create_cmd(args, shell) - if self.server is None: - return utils.cmd(args, wait=wait, shell=shell, env=self.env) - else: - return self.server.remote_cmd(args, wait=wait, env=self.env) - - def _unique_name(self, name: str) -> str: - """ - Creates a session/node unique prefixed name for the provided input. 
- - :param name: name to make unique - :return: unique session/node prefixed name - """ - return f"{self.session.id}.{self.id}.{name}" + return get_net_client(use_ovs, self.nsenter_cmd) def alive(self) -> bool: """ @@ -135,64 +112,22 @@ class DockerNode(CoreNode): :return: True if node is alive, False otherwise """ - try: - running = self.host_cmd( - f"{DOCKER} inspect -f '{{{{.State.Running}}}}' {self.name}" - ) - return json.loads(running) - except CoreCommandError: - return False + return self.client.is_alive() def startup(self) -> None: """ - Create a docker container instance for the specified image. + Start a new namespace node by invoking the vnoded process that + allocates a new namespace. Bring up the loopback device and set + the hostname. :return: nothing """ with self.lock: if self.up: - raise CoreError(f"starting node({self.name}) that is already up") - # create node directory + raise ValueError("starting a node that is already up") self.makenodedir() - # setup commands for creating bind/volume mounts - binds = "" - for src, dst in self.binds: - binds += f"--mount type=bind,source={src},target={dst} " - volumes = "" - for volume in self.volumes.values(): - volumes += ( - f"--mount type=volume," f"source={volume.src},target={volume.dst} " - ) - # normalize hostname - hostname = self.name.replace("_", "-") - # create container and retrieve the created containers PID - self.host_cmd( - f"{DOCKER} run -td --init --net=none --hostname {hostname} " - f"--name {self.name} --sysctl net.ipv6.conf.all.disable_ipv6=0 " - f"{binds} {volumes} " - f"--privileged {self.image} tail -f /dev/null" - ) - # retrieve pid and process environment for use in nsenter commands - self.pid = self.host_cmd( - f"{DOCKER} inspect -f '{{{{.State.Pid}}}}' {self.name}" - ) - output = self.host_cmd(f"cat /proc/{self.pid}/environ") - for line in output.split("\x00"): - if not line: - continue - key, value = line.split("=") - self.env[key] = value - # setup symlinks for bind and volume mounts within - for src, dst in self.binds: - link_path = self.host_path(Path(dst), True) - self.host_cmd(f"ln -s {src} {link_path}") - for volume in self.volumes.values(): - volume.path = self.host_cmd( - f"{DOCKER} volume inspect -f '{{{{.Mountpoint}}}}' {volume.src}" - ) - link_path = self.host_path(Path(volume.dst), True) - self.host_cmd(f"ln -s {volume.path} {link_path}") - logger.debug("node(%s) pid: %s", self.name, self.pid) + self.client = DockerClient(self.name, self.image, self.host_cmd) + self.pid = self.client.create_container() self.up = True def shutdown(self) -> None: @@ -204,14 +139,20 @@ class DockerNode(CoreNode): # nothing to do if node is not up if not self.up: return + with self.lock: self.ifaces.clear() - self.host_cmd(f"{DOCKER} rm -f {self.name}") - for volume in self.volumes.values(): - if volume.delete: - self.host_cmd(f"{DOCKER} volume rm {volume.src}") + self.client.stop_container() self.up = False + def nsenter_cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: + if self.server is None: + args = self.client.create_ns_cmd(args) + return utils.cmd(args, wait=wait, shell=shell) + else: + args = self.client.create_ns_cmd(args) + return self.server.remote_cmd(args, wait=wait) + def termcmdstring(self, sh: str = "/bin/sh") -> str: """ Create a terminal command string. 
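
[Editor's note, not part of the patch] The hunks above swap DockerNode's command wrapping for a DockerClient helper that enters the container's namespaces with nsenter. As an illustrative sketch only, under the assumption that the container PID has already been read via `docker inspect -f '{{.State.Pid}}' <name>`, the resulting command path looks roughly like this (the pid value below is hypothetical):

    # sketch of the nsenter wrapping shown in DockerClient.create_ns_cmd()
    pid = "12345"                            # hypothetical value from DockerClient.get_pid()
    cmd = "ip addr show"
    wrapped = f"nsenter -t {pid} -a {cmd}"   # enter all namespaces of the container's init process
    print(wrapped)
    # DockerNode.nsenter_cmd() then executes `wrapped` locally via utils.cmd(),
    # or via server.remote_cmd() when the node runs on a distributed server.
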
@@ -219,78 +160,79 @@ class DockerNode(CoreNode): :param sh: shell to execute command in :return: str """ - terminal = f"{DOCKER} exec -it {self.name} {sh}" - if self.server is None: - return terminal - else: - return f"ssh -X -f {self.server.host} xterm -e {terminal}" + return f"docker exec -it {self.name} bash" - def create_dir(self, dir_path: Path) -> None: + def privatedir(self, path: str) -> None: """ Create a private directory. - :param dir_path: path to create + :param path: path to create :return: nothing """ - logger.debug("creating node dir: %s", dir_path) - self.cmd(f"mkdir -p {dir_path}") + logging.debug("creating node dir: %s", path) + args = f"mkdir -p {path}" + self.cmd(args) - def mount(self, src_path: str, target_path: str) -> None: + def mount(self, source: str, target: str) -> None: """ Create and mount a directory. - :param src_path: source directory to mount - :param target_path: target directory to create + :param source: source directory to mount + :param target: target directory to create :return: nothing :raises CoreCommandError: when a non-zero exit status occurs """ - logger.debug("mounting source(%s) target(%s)", src_path, target_path) + logging.debug("mounting source(%s) target(%s)", source, target) raise Exception("not supported") - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: """ Create a node file with a given mode. - :param file_path: name of file to create + :param filename: name of file to create :param contents: contents of file :param mode: mode for file :return: nothing """ - logger.debug("node(%s) create file(%s) mode(%o)", self.name, file_path, mode) + logging.debug("nodefile filename(%s) mode(%s)", filename, mode) + directory = os.path.dirname(filename) temp = NamedTemporaryFile(delete=False) - temp.write(contents.encode()) + temp.write(contents.encode("utf-8")) temp.close() - temp_path = Path(temp.name) - directory = file_path.parent - if str(directory) != ".": + + if directory: self.cmd(f"mkdir -m {0o755:o} -p {directory}") if self.server is not None: - self.server.remote_put(temp_path, temp_path) - self.host_cmd(f"{DOCKER} cp {temp_path} {self.name}:{file_path}") - self.cmd(f"chmod {mode:o} {file_path}") + self.server.remote_put(temp.name, temp.name) + self.client.copy_file(temp.name, filename) + self.cmd(f"chmod {mode:o} {filename}") if self.server is not None: - self.host_cmd(f"rm -f {temp_path}") - temp_path.unlink() + self.host_cmd(f"rm -f {temp.name}") + os.unlink(temp.name) + logging.debug("node(%s) added file: %s; mode: 0%o", self.name, filename, mode) - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: + def nodefilecopy(self, filename: str, srcfilename: str, mode: int = None) -> None: """ Copy a file to a node, following symlinks and preserving metadata. Change file mode if specified. 
- :param dst_path: file name to copy file to - :param src_path: file to copy + :param filename: file name to copy file to + :param srcfilename: file to copy :param mode: mode to copy to :return: nothing """ - logger.info( - "node file copy file(%s) source(%s) mode(%o)", dst_path, src_path, mode or 0 + logging.info( + "node file copy file(%s) source(%s) mode(%s)", filename, srcfilename, mode ) - self.cmd(f"mkdir -p {dst_path.parent}") - if self.server: + directory = os.path.dirname(filename) + self.cmd(f"mkdir -p {directory}") + + if self.server is None: + source = srcfilename + else: temp = NamedTemporaryFile(delete=False) - temp_path = Path(temp.name) - src_path = temp_path - self.server.remote_put(src_path, temp_path) - self.host_cmd(f"{DOCKER} cp {src_path} {self.name}:{dst_path}") - if mode is not None: - self.cmd(f"chmod {mode:o} {dst_path}") + source = temp.name + self.server.remote_put(source, temp.name) + + self.client.copy_file(source, filename) + self.cmd(f"chmod {mode:o} {filename}") diff --git a/daemon/core/nodes/interface.py b/daemon/core/nodes/interface.py index 294e85f9..99f4fb8d 100644 --- a/daemon/core/nodes/interface.py +++ b/daemon/core/nodes/interface.py @@ -3,72 +3,23 @@ virtual ethernet classes that implement the interfaces available under Linux. """ import logging -import math -from pathlib import Path -from typing import TYPE_CHECKING, Callable, Optional +import time +from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Tuple import netaddr from core import utils -from core.emulator.data import InterfaceData, LinkOptions +from core.emulator.data import LinkOptions from core.emulator.enumerations import TransportType from core.errors import CoreCommandError, CoreError -from core.executables import TC from core.nodes.netclient import LinuxNetClient, get_net_client -logger = logging.getLogger(__name__) - if TYPE_CHECKING: - from core.emulator.session import Session from core.emulator.distributed import DistributedServer - from core.nodes.base import CoreNetworkBase, CoreNode, NodeBase + from core.emulator.session import Session + from core.nodes.base import CoreNetworkBase, CoreNode DEFAULT_MTU: int = 1500 -IFACE_NAME_LENGTH: int = 15 - - -def tc_clear_cmd(name: str) -> str: - """ - Create tc command to clear device configuration. - - :param name: name of device to clear - :return: tc command - """ - return f"{TC} qdisc delete dev {name} root handle 10:" - - -def tc_cmd(name: str, options: LinkOptions, mtu: int) -> str: - """ - Create tc command to configure a device with given name and options. 
- - :param name: name of device to configure - :param options: options to configure with - :param mtu: mtu for configuration - :return: tc command - """ - netem = "" - if options.bandwidth is not None: - limit = 1000 - bw = options.bandwidth / 1000 - if options.buffer is not None and options.buffer > 0: - limit = options.buffer - elif options.delay and options.bandwidth: - delay = options.delay / 1000 - limit = max(2, math.ceil((2 * bw * delay) / (8 * mtu))) - netem += f" rate {bw}kbit" - netem += f" limit {limit}" - if options.delay is not None: - netem += f" delay {options.delay}us" - if options.jitter is not None: - if options.delay is None: - netem += f" delay 0us {options.jitter}us 25%" - else: - netem += f" {options.jitter}us 25%" - if options.loss is not None and options.loss > 0: - netem += f" loss {min(options.loss, 100)}%" - if options.dup is not None and options.dup > 0: - netem += f" duplicate {min(options.dup, 100)}%" - return f"{TC} qdisc replace dev {name} root handle 10: netem {netem}" class CoreInterface: @@ -78,63 +29,56 @@ class CoreInterface: def __init__( self, - _id: int, + session: "Session", + node: "CoreNode", name: str, localname: str, - use_ovs: bool, - mtu: int = DEFAULT_MTU, - node: "NodeBase" = None, + mtu: int, server: "DistributedServer" = None, ) -> None: """ Creates a CoreInterface instance. - :param _id: interface id for associated node + :param session: core session instance + :param node: node for interface :param name: interface name :param localname: interface local name - :param use_ovs: True to use ovs, False otherwise :param mtu: mtu value - :param node: node associated with this interface - :param server: remote server node will run on, default is None for localhost + :param server: remote server node + will run on, default is None for localhost """ - if len(name) >= IFACE_NAME_LENGTH: - raise CoreError( - f"interface name ({name}) too long, max {IFACE_NAME_LENGTH}" - ) - if len(localname) >= IFACE_NAME_LENGTH: - raise CoreError( - f"interface local name ({localname}) too long, max {IFACE_NAME_LENGTH}" - ) - self.id: int = _id - self.node: Optional["NodeBase"] = node - # id of interface for network, used by wlan/emane - self.net_id: Optional[int] = None + self.session: "Session" = session + self.node: "CoreNode" = node self.name: str = name self.localname: str = localname self.up: bool = False self.mtu: int = mtu self.net: Optional[CoreNetworkBase] = None - self.ip4s: list[netaddr.IPNetwork] = [] - self.ip6s: list[netaddr.IPNetwork] = [] + self.othernet: Optional[CoreNetworkBase] = None + self._params: Dict[str, float] = {} + self.ip4s: List[netaddr.IPNetwork] = [] + self.ip6s: List[netaddr.IPNetwork] = [] self.mac: Optional[netaddr.EUI] = None # placeholder position hook self.poshook: Callable[[CoreInterface], None] = lambda x: None # used with EMANE self.transport_type: TransportType = TransportType.VIRTUAL + # id of interface for node + self.node_id: Optional[int] = None + # id of interface for network + self.net_id: Optional[int] = None # id used to find flow data self.flow_id: Optional[int] = None self.server: Optional["DistributedServer"] = server - self.net_client: LinuxNetClient = get_net_client(use_ovs, self.host_cmd) - self.control: bool = False - # configuration data - self.has_netem: bool = False - self.options: LinkOptions = LinkOptions() + self.net_client: LinuxNetClient = get_net_client( + self.session.use_ovs(), self.host_cmd + ) def host_cmd( self, args: str, - env: dict[str, str] = None, - cwd: Path = None, + env: Dict[str, str] 
= None, + cwd: str = None, wait: bool = True, shell: bool = False, ) -> str: @@ -160,13 +104,7 @@ class CoreInterface: :return: nothing """ - self.net_client.create_veth(self.localname, self.name) - if self.mtu > 0: - self.net_client.set_mtu(self.name, self.mtu) - self.net_client.set_mtu(self.localname, self.mtu) - self.net_client.device_up(self.name) - self.net_client.device_up(self.localname) - self.up = True + pass def shutdown(self) -> None: """ @@ -174,14 +112,30 @@ class CoreInterface: :return: nothing """ - if not self.up: - return - if self.localname: - try: - self.net_client.delete_device(self.localname) - except CoreCommandError: - pass - self.up = False + pass + + def attachnet(self, net: "CoreNetworkBase") -> None: + """ + Attach network. + + :param net: network to attach + :return: nothing + """ + if self.net: + self.detachnet() + self.net = None + + net.attach(self) + self.net = net + + def detachnet(self) -> None: + """ + Detach from a network. + + :return: nothing + """ + if self.net is not None: + self.net.detach(self) def add_ip(self, ip: str) -> None: """ @@ -235,7 +189,7 @@ class CoreInterface: """ return next(iter(self.ip6s), None) - def ips(self) -> list[netaddr.IPNetwork]: + def ips(self) -> List[netaddr.IPNetwork]: """ Retrieve a list of all ip4 and ip6 addresses combined. @@ -259,6 +213,91 @@ class CoreInterface: except netaddr.AddrFormatError as e: raise CoreError(f"invalid mac address({mac}): {e}") + def getparam(self, key: str) -> float: + """ + Retrieve a parameter from the, or None if the parameter does not exist. + + :param key: parameter to get value for + :return: parameter value + """ + return self._params.get(key) + + def get_link_options(self, unidirectional: int) -> LinkOptions: + """ + Get currently set params as link options. + + :param unidirectional: unidirectional setting + :return: link options + """ + delay = self.getparam("delay") + if delay is not None: + delay = int(delay) + bandwidth = self.getparam("bw") + if bandwidth is not None: + bandwidth = int(bandwidth) + dup = self.getparam("duplicate") + if dup is not None: + dup = int(dup) + jitter = self.getparam("jitter") + if jitter is not None: + jitter = int(jitter) + buffer = self.getparam("buffer") + if buffer is not None: + buffer = int(buffer) + return LinkOptions( + delay=delay, + bandwidth=bandwidth, + dup=dup, + jitter=jitter, + loss=self.getparam("loss"), + buffer=buffer, + unidirectional=unidirectional, + ) + + def getparams(self) -> List[Tuple[str, float]]: + """ + Return (key, value) pairs for parameters. + """ + parameters = [] + for k in sorted(self._params.keys()): + parameters.append((k, self._params[k])) + return parameters + + def setparam(self, key: str, value: float) -> bool: + """ + Set a parameter value, returns True if the parameter has changed. + + :param key: parameter name to set + :param value: parameter value + :return: True if parameter changed, False otherwise + """ + # treat None and 0 as unchanged values + logging.debug("setting param: %s - %s", key, value) + if value is None or value < 0: + return False + + current_value = self._params.get(key) + if current_value is not None and current_value == value: + return False + + self._params[key] = value + return True + + def swapparams(self, name: str) -> None: + """ + Swap out parameters dict for name. If name does not exist, + intialize it. This is for supporting separate upstream/downstream + parameters when two layer-2 nodes are linked together. 
+ + :param name: name of parameter to swap + :return: nothing + """ + tmp = self._params + if not hasattr(self, name): + setattr(self, name, {}) + self._params = getattr(self, name) + setattr(self, name, tmp) + def setposition(self) -> None: """ Dispatch position hook handler when possible. @@ -293,47 +332,240 @@ class CoreInterface: """ return self.transport_type == TransportType.VIRTUAL - def set_config(self) -> None: - # clear current settings - if self.options.is_clear(): - if self.has_netem: - cmd = tc_clear_cmd(self.name) - if self.node: - self.node.cmd(cmd) - else: - self.host_cmd(cmd) - self.has_netem = False - # set updated settings - else: - cmd = tc_cmd(self.name, self.options, self.mtu) - if self.node: - self.node.cmd(cmd) + +class Veth(CoreInterface): + """ + Provides virtual ethernet functionality for core nodes. + """ + + def __init__( + self, + session: "Session", + node: "CoreNode", + name: str, + localname: str, + mtu: int = DEFAULT_MTU, + server: "DistributedServer" = None, + start: bool = True, + ) -> None: + """ + Creates a VEth instance. + + :param session: core session instance + :param node: related core node + :param name: interface name + :param localname: interface local name + :param mtu: interface mtu + :param server: remote server node + will run on, default is None for localhost + :param start: start flag + :raises CoreCommandError: when there is a command exception + """ + # note that net arg is ignored + super().__init__(session, node, name, localname, mtu, server) + if start: + self.startup() + + def startup(self) -> None: + """ + Interface startup logic. + + :return: nothing + :raises CoreCommandError: when there is a command exception + """ + self.net_client.create_veth(self.localname, self.name) + self.net_client.device_up(self.localname) + self.up = True + + def shutdown(self) -> None: + """ + Interface shutdown logic. + + :return: nothing + """ + if not self.up: + return + if self.node: + try: + self.node.node_net_client.device_flush(self.name) + except CoreCommandError: + pass + if self.localname: + try: + self.net_client.delete_device(self.localname) + except CoreCommandError: + pass + self.up = False + + +class TunTap(CoreInterface): + """ + TUN/TAP virtual device in TAP mode + """ + + def __init__( + self, + session: "Session", + node: "CoreNode", + name: str, + localname: str, + mtu: int = DEFAULT_MTU, + server: "DistributedServer" = None, + start: bool = True, + ) -> None: + """ + Create a TunTap instance. + + :param session: core session instance + :param node: related core node + :param name: interface name + :param localname: local interface name + :param mtu: interface mtu + :param server: remote server node + will run on, default is None for localhost + :param start: start flag + """ + super().__init__(session, node, name, localname, mtu, server) + if start: + self.startup() + + def startup(self) -> None: + """ + Startup logic for a tunnel tap. + + :return: nothing + """ + # TODO: more sophisticated TAP creation here + # Debian does not support -p (tap) option, RedHat does. + # For now, this is disabled to allow the TAP to be created by another + # system (e.g. EMANE"s emanetransportd) + # check_call(["tunctl", "-t", self.name]) + # self.install() + self.up = True + + def shutdown(self) -> None: + """ + Shutdown functionality for a tunnel tap. 
+ + :return: nothing + """ + if not self.up: + return + + try: + self.node.node_net_client.device_flush(self.name) + except CoreCommandError: + logging.exception("error shutting down tunnel tap") + + self.up = False + + def waitfor( + self, func: Callable[[], int], attempts: int = 10, maxretrydelay: float = 0.25 + ) -> bool: + """ + Wait for func() to return zero with exponential backoff. + + :param func: function to wait for a result of zero + :param attempts: number of attempts to wait for a zero result + :param maxretrydelay: maximum retry delay + :return: True if wait succeeded, False otherwise + """ + delay = 0.01 + result = False + for i in range(1, attempts + 1): + r = func() + if r == 0: + result = True + break + msg = f"attempt {i} failed with nonzero exit status {r}" + if i < attempts + 1: + msg += ", retrying..." + logging.info(msg) + time.sleep(delay) + delay += delay + if delay > maxretrydelay: + delay = maxretrydelay else: - self.host_cmd(cmd) - self.has_netem = True + msg += ", giving up" + logging.info(msg) - def get_data(self) -> InterfaceData: - """ - Retrieve the data representation of this interface. + return result - :return: interface data + def waitfordevicelocal(self) -> None: """ - ip4 = self.get_ip4() - ip4_addr = str(ip4.ip) if ip4 else None - ip4_mask = ip4.prefixlen if ip4 else None - ip6 = self.get_ip6() - ip6_addr = str(ip6.ip) if ip6 else None - ip6_mask = ip6.prefixlen if ip6 else None - mac = str(self.mac) if self.mac else None - return InterfaceData( - id=self.id, - name=self.name, - mac=mac, - ip4=ip4_addr, - ip4_mask=ip4_mask, - ip6=ip6_addr, - ip6_mask=ip6_mask, - ) + Check for presence of a local device - tap device may not + appear right away waits + + :return: wait for device local response + """ + logging.debug("waiting for device local: %s", self.localname) + + def localdevexists(): + try: + self.net_client.device_show(self.localname) + return 0 + except CoreCommandError: + return 1 + + self.waitfor(localdevexists) + + def waitfordevicenode(self) -> None: + """ + Check for presence of a node device - tap device may not appear right away waits. + + :return: nothing + """ + logging.debug("waiting for device node: %s", self.name) + + def nodedevexists(): + try: + self.node.node_net_client.device_show(self.name) + return 0 + except CoreCommandError: + return 1 + + count = 0 + while True: + result = self.waitfor(nodedevexists) + if result: + break + + # TODO: emane specific code + # check if this is an EMANE interface; if so, continue + # waiting if EMANE is still running + should_retry = count < 5 + is_emane = self.session.emane.is_emane_net(self.net) + is_emane_running = self.session.emane.emanerunning(self.node) + if all([should_retry, is_emane, is_emane_running]): + count += 1 + else: + raise RuntimeError("node device failed to exist") + + def install(self) -> None: + """ + Install this TAP into its namespace. This is not done from the + startup() method but called at a later time when a userspace + program (running on the host) has had a chance to open the socket + end of the TAP. + + :return: nothing + :raises CoreCommandError: when there is a command exception + """ + self.waitfordevicelocal() + netns = str(self.node.pid) + self.net_client.device_ns(self.localname, netns) + self.node.node_net_client.device_name(self.localname, self.name) + self.node.node_net_client.device_up(self.name) + + def set_ips(self) -> None: + """ + Set interface ip addresses. 
+ + :return: nothing + """ + self.waitfordevicenode() + for ip in self.ips(): + self.node.node_net_client.create_address(self.name, str(ip)) class GreTap(CoreInterface): @@ -345,55 +577,47 @@ class GreTap(CoreInterface): def __init__( self, - session: "Session", - remoteip: str, - key: int = None, node: "CoreNode" = None, - mtu: int = DEFAULT_MTU, + name: str = None, + session: "Session" = None, + mtu: int = 1458, + remoteip: str = None, _id: int = None, localip: str = None, ttl: int = 255, + key: int = None, + start: bool = True, server: "DistributedServer" = None, ) -> None: """ Creates a GreTap instance. - :param session: session for this gre tap - :param remoteip: remote address - :param key: gre tap key :param node: related core node + :param name: interface name + :param session: core session instance :param mtu: interface mtu + :param remoteip: remote address :param _id: object id :param localip: local address :param ttl: ttl value + :param key: gre tap key + :param start: start flag :param server: remote server node will run on, default is None for localhost :raises CoreCommandError: when there is a command exception """ if _id is None: _id = ((id(self) >> 16) ^ (id(self) & 0xFFFF)) & 0xFFFF - self.id: int = _id + self.id = _id sessionid = session.short_session_id() localname = f"gt.{self.id}.{sessionid}" - name = f"{localname}p" - super().__init__(0, name, localname, session.use_ovs(), mtu, node, server) - self.transport_type: TransportType = TransportType.RAW - self.remote_ip: str = remoteip - self.ttl: int = ttl - self.key: Optional[int] = key - self.local_ip: Optional[str] = localip - - def startup(self) -> None: - """ - Startup logic for a GreTap. - - :return: nothing - """ - self.net_client.create_gretap( - self.localname, self.remote_ip, self.local_ip, self.ttl, self.key - ) - if self.mtu > 0: - self.net_client.set_mtu(self.localname, self.mtu) + super().__init__(session, node, name, localname, mtu, server) + self.transport_type = TransportType.RAW + if not start: + return + if remoteip is None: + raise CoreError("missing remote IP required for GRE TAP device") + self.net_client.create_gretap(self.localname, remoteip, localip, ttl, key) self.net_client.device_up(self.localname) self.up = True @@ -408,5 +632,5 @@ class GreTap(CoreInterface): self.net_client.device_down(self.localname) self.net_client.delete_device(self.localname) except CoreCommandError: - logger.exception("error during shutdown") + logging.exception("error during shutdown") self.localname = None diff --git a/daemon/core/nodes/lxd.py b/daemon/core/nodes/lxd.py index e4cba002..9773cb95 100644 --- a/daemon/core/nodes/lxd.py +++ b/daemon/core/nodes/lxd.py @@ -1,48 +1,81 @@ import json import logging -import shlex +import os import time -from dataclasses import dataclass, field -from pathlib import Path from tempfile import NamedTemporaryFile -from typing import TYPE_CHECKING +from typing import TYPE_CHECKING, Callable, Dict, Optional -from core.emulator.data import InterfaceData, LinkOptions +from core import utils from core.emulator.distributed import DistributedServer +from core.emulator.enumerations import NodeTypes from core.errors import CoreCommandError -from core.executables import BASH -from core.nodes.base import CoreNode, CoreNodeOptions +from core.nodes.base import CoreNode from core.nodes.interface import CoreInterface -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.session import Session -@dataclass -class LxcOptions(CoreNodeOptions): - image: str = "ubuntu" - """image 
used when creating container""" - binds: list[tuple[str, str]] = field(default_factory=list) - """bind mount source and destinations to setup within container""" - volumes: list[tuple[str, str, bool, bool]] = field(default_factory=list) - """ - volume mount source, destination, unique, delete to setup within container +class LxdClient: + def __init__(self, name: str, image: str, run: Callable[..., str]) -> None: + self.name: str = name + self.image: str = image + self.run: Callable[..., str] = run + self.pid: Optional[int] = None - unique is True for node unique volume naming - delete is True for deleting volume mount during shutdown - """ + def create_container(self) -> int: + self.run(f"lxc launch {self.image} {self.name}") + data = self.get_info() + self.pid = data["state"]["pid"] + return self.pid + + def get_info(self) -> Dict: + args = f"lxc list {self.name} --format json" + output = self.run(args) + data = json.loads(output) + if not data: + raise CoreCommandError(1, args, f"LXC({self.name}) not present") + return data[0] + + def is_alive(self) -> bool: + try: + data = self.get_info() + return data["state"]["status"] == "Running" + except CoreCommandError: + return False + + def stop_container(self) -> None: + self.run(f"lxc delete --force {self.name}") + + def create_cmd(self, cmd: str) -> str: + return f"lxc exec -nT {self.name} -- {cmd}" + + def create_ns_cmd(self, cmd: str) -> str: + return f"nsenter -t {self.pid} -m -u -i -p -n {cmd}" + + def check_cmd(self, cmd: str, wait: bool = True, shell: bool = False) -> str: + args = self.create_cmd(cmd) + return utils.cmd(args, wait=wait, shell=shell) + + def copy_file(self, source: str, destination: str) -> None: + if destination[0] != "/": + destination = os.path.join("/root/", destination) + + args = f"lxc file push {source} {self.name}/{destination}" + self.run(args) class LxcNode(CoreNode): + apitype = NodeTypes.LXC + def __init__( self, session: "Session", _id: int = None, name: str = None, + nodedir: str = None, server: DistributedServer = None, - options: LxcOptions = None, + image: str = None, ) -> None: """ Create a LxcNode instance. @@ -50,37 +83,15 @@ class LxcNode(CoreNode): :param session: core session instance :param _id: object id :param name: object name + :param nodedir: node directory :param server: remote server node will run on, default is None for localhost - :param options: option to create node with + :param image: image to start container with """ - options = options or LxcOptions() - super().__init__(session, _id, name, server, options) - self.image: str = options.image - - @classmethod - def create_options(cls) -> LxcOptions: - return LxcOptions() - - def create_cmd(self, args: str, shell: bool = False) -> str: - """ - Create command used to run commands within the context of a node. 
- - :param args: command arguments - :param shell: True to run shell like, False otherwise - :return: node command - """ - if shell: - args = f"{BASH} -c {shlex.quote(args)}" - return f"nsenter -t {self.pid} -m -u -i -p -n {args}" - - def _get_info(self) -> dict: - args = f"lxc list {self.name} --format json" - output = self.host_cmd(args) - data = json.loads(output) - if not data: - raise CoreCommandError(1, args, f"LXC({self.name}) not present") - return data[0] + if image is None: + image = "ubuntu" + self.image: str = image + super().__init__(session, _id, name, nodedir, server) def alive(self) -> bool: """ @@ -88,11 +99,7 @@ class LxcNode(CoreNode): :return: True if node is alive, False otherwise """ - try: - data = self._get_info() - return data["state"]["status"] == "Running" - except CoreCommandError: - return False + return self.client.is_alive() def startup(self) -> None: """ @@ -104,9 +111,8 @@ class LxcNode(CoreNode): if self.up: raise ValueError("starting a node that is already up") self.makenodedir() - self.host_cmd(f"lxc launch {self.image} {self.name}") - data = self._get_info() - self.pid = data["state"]["pid"] + self.client = LxdClient(self.name, self.image, self.host_cmd) + self.pid = self.client.create_container() self.up = True def shutdown(self) -> None: @@ -118,9 +124,10 @@ class LxcNode(CoreNode): # nothing to do if node is not up if not self.up: return + with self.lock: self.ifaces.clear() - self.host_cmd(f"lxc delete --force {self.name}") + self.client.stop_container() self.up = False def termcmdstring(self, sh: str = "/bin/sh") -> str: @@ -130,92 +137,85 @@ class LxcNode(CoreNode): :param sh: shell to execute command in :return: str """ - terminal = f"lxc exec {self.name} -- {sh}" - if self.server is None: - return terminal - else: - return f"ssh -X -f {self.server.host} xterm -e {terminal}" + return f"lxc exec {self.name} -- {sh}" - def create_dir(self, dir_path: Path) -> None: + def privatedir(self, path: str) -> None: """ Create a private directory. - :param dir_path: path to create + :param path: path to create :return: nothing """ - logger.info("creating node dir: %s", dir_path) - args = f"mkdir -p {dir_path}" + logging.info("creating node dir: %s", path) + args = f"mkdir -p {path}" self.cmd(args) - def mount(self, src_path: Path, target_path: Path) -> None: + def mount(self, source: str, target: str) -> None: """ Create and mount a directory. - :param src_path: source directory to mount - :param target_path: target directory to create + :param source: source directory to mount + :param target: target directory to create :return: nothing :raises CoreCommandError: when a non-zero exit status occurs """ - logger.debug("mounting source(%s) target(%s)", src_path, target_path) + logging.debug("mounting source(%s) target(%s)", source, target) raise Exception("not supported") - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: """ Create a node file with a given mode. 
- :param file_path: name of file to create + :param filename: name of file to create :param contents: contents of file :param mode: mode for file :return: nothing """ - logger.debug("node(%s) create file(%s) mode(%o)", self.name, file_path, mode) + logging.debug("nodefile filename(%s) mode(%s)", filename, mode) + + directory = os.path.dirname(filename) temp = NamedTemporaryFile(delete=False) - temp.write(contents.encode()) + temp.write(contents.encode("utf-8")) temp.close() - temp_path = Path(temp.name) - directory = file_path.parent - if str(directory) != ".": + + if directory: self.cmd(f"mkdir -m {0o755:o} -p {directory}") if self.server is not None: - self.server.remote_put(temp_path, temp_path) - if not str(file_path).startswith("/"): - file_path = Path("/root/") / file_path - self.host_cmd(f"lxc file push {temp_path} {self.name}/{file_path}") - self.cmd(f"chmod {mode:o} {file_path}") + self.server.remote_put(temp.name, temp.name) + self.client.copy_file(temp.name, filename) + self.cmd(f"chmod {mode:o} {filename}") if self.server is not None: - self.host_cmd(f"rm -f {temp_path}") - temp_path.unlink() - logger.debug("node(%s) added file: %s; mode: 0%o", self.name, file_path, mode) + self.host_cmd(f"rm -f {temp.name}") + os.unlink(temp.name) + logging.debug("node(%s) added file: %s; mode: 0%o", self.name, filename, mode) - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: + def nodefilecopy(self, filename: str, srcfilename: str, mode: int = None) -> None: """ Copy a file to a node, following symlinks and preserving metadata. Change file mode if specified. - :param dst_path: file name to copy file to - :param src_path: file to copy + :param filename: file name to copy file to + :param srcfilename: file to copy :param mode: mode to copy to :return: nothing """ - logger.info( - "node file copy file(%s) source(%s) mode(%o)", dst_path, src_path, mode or 0 + logging.info( + "node file copy file(%s) source(%s) mode(%s)", filename, srcfilename, mode ) - self.cmd(f"mkdir -p {dst_path.parent}") - if self.server: - temp = NamedTemporaryFile(delete=False) - temp_path = Path(temp.name) - src_path = temp_path - self.server.remote_put(src_path, temp_path) - if not str(dst_path).startswith("/"): - dst_path = Path("/root/") / dst_path - self.host_cmd(f"lxc file push {src_path} {self.name}/{dst_path}") - if mode is not None: - self.cmd(f"chmod {mode:o} {dst_path}") + directory = os.path.dirname(filename) + self.cmd(f"mkdir -p {directory}") - def create_iface( - self, iface_data: InterfaceData = None, options: LinkOptions = None - ) -> CoreInterface: - iface = super().create_iface(iface_data, options) + if self.server is None: + source = srcfilename + else: + temp = NamedTemporaryFile(delete=False) + source = temp.name + self.server.remote_put(source, temp.name) + + self.client.copy_file(source, filename) + self.cmd(f"chmod {mode:o} {filename}") + + def add_iface(self, iface: CoreInterface, iface_id: int) -> None: + super().add_iface(iface, iface_id) # adding small delay to allow time for adding addresses to work correctly time.sleep(0.5) - return iface diff --git a/daemon/core/nodes/netclient.py b/daemon/core/nodes/netclient.py index 74087e31..729550b6 100644 --- a/daemon/core/nodes/netclient.py +++ b/daemon/core/nodes/netclient.py @@ -5,7 +5,6 @@ from typing import Callable import netaddr -from core import utils from core.executables import ETHTOOL, IP, OVS_VSCTL, SYSCTL, TC @@ -29,7 +28,6 @@ class LinuxNetClient: :param name: name for hostname :return: nothing """ - name = 
name.replace("_", "-") self.run(f"hostname {name}") def create_route(self, route: str, device: str) -> None: @@ -40,7 +38,7 @@ class LinuxNetClient: :param device: device to add route to :return: nothing """ - self.run(f"{IP} route replace {route} dev {device}") + self.run(f"{IP} route add {route} dev {device}") def device_up(self, device: str) -> None: """ @@ -97,14 +95,14 @@ class LinuxNetClient: """ return self.run(f"cat /sys/class/net/{device}/address") - def get_ifindex(self, device: str) -> int: + def get_ifindex(self, device: str) -> str: """ Retrieve ifindex for a given device. :param device: device to get ifindex for :return: ifindex """ - return int(self.run(f"cat /sys/class/net/{device}/ifindex")) + return self.run(f"cat /sys/class/net/{device}/ifindex") def device_ns(self, device: str, namespace: str) -> None: """ @@ -178,7 +176,6 @@ class LinuxNetClient: if netaddr.valid_ipv6(address.split("/")[0]): # IPv6 addresses are removed by default on interface down. # Make sure that the IPv6 address we add is not removed - device = utils.sysctl_devname(device) self.run(f"{SYSCTL} -w net.ipv6.conf.{device}.keep_addr_on_down=1") def delete_address(self, device: str, address: str) -> None: @@ -299,16 +296,6 @@ class LinuxNetClient: """ self.run(f"{IP} link set {name} type bridge ageing_time {value}") - def set_mtu(self, name: str, value: int) -> None: - """ - Sets the mtu value for a device. - - :param name: name of device to set value for - :param value: mtu value to set - :return: nothing - """ - self.run(f"{IP} link set {name} mtu {value}") - class OvsNetClient(LinuxNetClient): """ @@ -374,15 +361,14 @@ class OvsNetClient(LinuxNetClient): return True return False - def set_mac_learning(self, name: str, value: int) -> None: + def disable_mac_learning(self, name: str) -> None: """ - Set mac learning for an OVS bridge. + Disable mac learning for a OVS bridge. :param name: bridge name - :param value: ageing time value :return: nothing """ - self.run(f"{OVS_VSCTL} set bridge {name} other_config:mac-aging-time={value}") + self.run(f"{OVS_VSCTL} set bridge {name} other_config:mac-aging-time=0") def get_net_client(use_ovs: bool, run: Callable[..., str]) -> LinuxNetClient: diff --git a/daemon/core/nodes/network.py b/daemon/core/nodes/network.py index 1ea9c31e..cb3aca79 100644 --- a/daemon/core/nodes/network.py +++ b/daemon/core/nodes/network.py @@ -3,188 +3,254 @@ Defines network nodes used within core. 
""" import logging +import math import threading -from dataclasses import dataclass -from pathlib import Path -from typing import TYPE_CHECKING, Optional +import time +from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Type import netaddr from core import utils -from core.emulator.data import InterfaceData, LinkData -from core.emulator.enumerations import MessageFlags, NetworkPolicy, RegisterTlvs +from core.emulator.data import InterfaceData, LinkData, LinkOptions +from core.emulator.enumerations import ( + LinkTypes, + MessageFlags, + NetworkPolicy, + NodeTypes, + RegisterTlvs, +) from core.errors import CoreCommandError, CoreError -from core.executables import NFTABLES -from core.nodes.base import CoreNetworkBase, NodeOptions -from core.nodes.interface import CoreInterface, GreTap +from core.executables import EBTABLES, TC +from core.nodes.base import CoreNetworkBase +from core.nodes.interface import CoreInterface, GreTap, Veth from core.nodes.netclient import get_net_client -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.distributed import DistributedServer from core.emulator.session import Session from core.location.mobility import WirelessModel, WayPointMobility + WirelessModelType = Type[WirelessModel] + LEARNING_DISABLED: int = 0 +ebtables_lock: threading.Lock = threading.Lock() -class NftablesQueue: +class EbtablesQueue: """ - Helper class for queuing up nftables commands into rate-limited + Helper class for queuing up ebtables commands into rate-limited atomic commits. This improves performance and reliability when there are many WLAN link updates. """ # update rate is every 300ms rate: float = 0.3 - atomic_file: str = "/tmp/pycore.nftables.atomic" - chain: str = "forward" + # ebtables + atomic_file: str = "/tmp/pycore.ebtables.atomic" def __init__(self) -> None: """ Initialize the helper class, but don't start the update thread until a WLAN is instantiated. """ - self.running: bool = False - self.run_thread: Optional[threading.Thread] = None + self.doupdateloop: bool = False + self.updatethread: Optional[threading.Thread] = None # this lock protects cmds and updates lists - self.lock: threading.Lock = threading.Lock() - # list of pending nftables commands - self.cmds: list[str] = [] + self.updatelock: threading.Lock = threading.Lock() + # list of pending ebtables commands + self.cmds: List[str] = [] # list of WLANs requiring update - self.updates: utils.SetQueue = utils.SetQueue() + self.updates: List["CoreNetwork"] = [] + # timestamps of last WLAN update; this keeps track of WLANs that are + # using this queue + self.last_update_time: Dict["CoreNetwork", float] = {} - def start(self) -> None: + def startupdateloop(self, wlan: "CoreNetwork") -> None: """ - Start thread to listen for updates for the provided network. + Kick off the update loop; only needs to be invoked once. :return: nothing """ - with self.lock: - if not self.running: - self.running = True - self.run_thread = threading.Thread(target=self.run, daemon=True) - self.run_thread.start() - - def stop(self) -> None: - """ - Stop updates for network, when no networks remain, stop update thread. - - :return: nothing - """ - with self.lock: - if self.running: - self.running = False - self.updates.put(None) - self.run_thread.join() - self.run_thread = None - - def run(self) -> None: - """ - Thread target that looks for networks needing update, and - rate limits the amount of nftables activity. 
Only one userspace program - should use nftables at any given time, or results can be unpredictable. - - :return: nothing - """ - while self.running: - net = self.updates.get() - if net is None: - break - self.build_cmds(net) - self.commit(net) - - def commit(self, net: "CoreNetwork") -> None: - """ - Commit changes to nftables for the provided network. - - :param net: network to commit nftables changes - :return: nothing - """ - if not self.cmds: + with self.updatelock: + self.last_update_time[wlan] = time.monotonic() + if self.doupdateloop: return - # write out nft commands to file - for cmd in self.cmds: - net.host_cmd(f"echo {cmd} >> {self.atomic_file}", shell=True) - # read file as atomic change - net.host_cmd(f"{NFTABLES} -f {self.atomic_file}") - # remove file - net.host_cmd(f"rm -f {self.atomic_file}") - self.cmds.clear() + self.doupdateloop = True + self.updatethread = threading.Thread(target=self.updateloop, daemon=True) + self.updatethread.start() - def update(self, net: "CoreNetwork") -> None: + def stopupdateloop(self, wlan: "CoreNetwork") -> None: """ - Flag this network has an update, so the nftables chain will be rebuilt. + Kill the update loop thread if there are no more WLANs using it. - :param net: wlan network :return: nothing """ - self.updates.put(net) - - def delete_table(self, net: "CoreNetwork") -> None: - """ - Delete nftable bridge rule table. - - :param net: network to delete table for - :return: nothing - """ - with self.lock: - net.host_cmd(f"{NFTABLES} delete table bridge {net.brname}") - - def build_cmds(self, net: "CoreNetwork") -> None: - """ - Inspect linked nodes for a network, and rebuild the nftables chain commands. - - :param net: network to build commands for - :return: nothing - """ - with net.linked_lock: - if net.has_nftables_chain: - self.cmds.append(f"flush table bridge {net.brname}") - else: - net.has_nftables_chain = True - policy = net.policy.value.lower() - self.cmds.append(f"add table bridge {net.brname}") - self.cmds.append( - f"add chain bridge {net.brname} {self.chain} {{type filter hook " - f"forward priority -1\\; policy {policy}\\;}}" + with self.updatelock: + try: + del self.last_update_time[wlan] + except KeyError: + logging.exception( + "error deleting last update time for wlan, ignored before: %s", wlan + ) + if len(self.last_update_time) > 0: + return + self.doupdateloop = False + if self.updatethread: + self.updatethread.join() + self.updatethread = None + + def ebatomiccmd(self, cmd: str) -> str: + """ + Helper for building ebtables atomic file command list. + + :param cmd: ebtable command + :return: ebtable atomic command + """ + return f"{EBTABLES} --atomic-file {self.atomic_file} {cmd}" + + def lastupdate(self, wlan: "CoreNetwork") -> float: + """ + Return the time elapsed since this WLAN was last updated. + + :param wlan: wlan entity + :return: elpased time + """ + try: + elapsed = time.monotonic() - self.last_update_time[wlan] + except KeyError: + self.last_update_time[wlan] = time.monotonic() + elapsed = 0.0 + + return elapsed + + def updated(self, wlan: "CoreNetwork") -> None: + """ + Keep track of when this WLAN was last updated. + + :param wlan: wlan entity + :return: nothing + """ + self.last_update_time[wlan] = time.monotonic() + self.updates.remove(wlan) + + def updateloop(self) -> None: + """ + Thread target that looks for WLANs needing update, and + rate limits the amount of ebtables activity. Only one userspace program + should use ebtables at any given time, or results can be unpredictable. 
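[Editor's note: the queue above batches rule changes and flushes them through ebtables' atomic-file mode; a condensed sketch of that save/modify/commit cycle, assuming the ebtables binary is on PATH. The helper name and direct subprocess calls are illustrative, not part of the diff.]

import subprocess

ATOMIC_FILE = "/tmp/pycore.ebtables.atomic"

def commit_rules(rules: list) -> None:
    # snapshot the kernel tables into the atomic file
    subprocess.run(["ebtables", "--atomic-file", ATOMIC_FILE, "--atomic-save"], check=True)
    # apply each queued rule against the file instead of the live kernel tables
    for rule in rules:
        subprocess.run(["ebtables", "--atomic-file", ATOMIC_FILE] + rule.split(), check=True)
    # push the whole file back to the kernel in one commit
    subprocess.run(["ebtables", "--atomic-file", ATOMIC_FILE, "--atomic-commit"], check=True)

# e.g. commit_rules(["-F b.1.abcd"]) flushes one bridge chain in a single commit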
+ + :return: nothing + """ + while self.doupdateloop: + with self.updatelock: + for wlan in self.updates: + # Check if wlan is from a previously closed session. Because of the + # rate limiting scheme employed here, this may happen if a new session + # is started soon after closing a previous session. + # TODO: if these are WlanNodes, this will never throw an exception + try: + wlan.session + except Exception: + # Just mark as updated to remove from self.updates. + self.updated(wlan) + continue + + if self.lastupdate(wlan) > self.rate: + self.buildcmds(wlan) + self.ebcommit(wlan) + self.updated(wlan) + + time.sleep(self.rate) + + def ebcommit(self, wlan: "CoreNetwork") -> None: + """ + Perform ebtables atomic commit using commands built in the self.cmds list. + + :return: nothing + """ + # save kernel ebtables snapshot to a file + args = self.ebatomiccmd("--atomic-save") + wlan.host_cmd(args) + + # modify the table file using queued ebtables commands + for c in self.cmds: + args = self.ebatomiccmd(c) + wlan.host_cmd(args) + self.cmds = [] + + # commit the table file to the kernel + args = self.ebatomiccmd("--atomic-commit") + wlan.host_cmd(args) + + try: + wlan.host_cmd(f"rm -f {self.atomic_file}") + except CoreCommandError: + logging.exception("error removing atomic file: %s", self.atomic_file) + + def ebchange(self, wlan: "CoreNetwork") -> None: + """ + Flag a change to the given WLAN's _linked dict, so the ebtables + chain will be rebuilt at the next interval. + + :return: nothing + """ + with self.updatelock: + if wlan not in self.updates: + self.updates.append(wlan) + + def buildcmds(self, wlan: "CoreNetwork") -> None: + """ + Inspect a _linked dict from a wlan, and rebuild the ebtables chain for that WLAN. + + :return: nothing + """ + with wlan._linked_lock: + if wlan.has_ebtables_chain: + # flush the chain + self.cmds.append(f"-F {wlan.brname}") + else: + wlan.has_ebtables_chain = True + self.cmds.extend( + [ + f"-N {wlan.brname} -P {wlan.policy.value}", + f"-A FORWARD --logical-in {wlan.brname} -j {wlan.brname}", + ] ) - # add default rule to accept all traffic not for this bridge - self.cmds.append( - f"add rule bridge {net.brname} {self.chain} " - f"ibriport != {net.brname} accept" - ) # rebuild the chain - for iface1, v in net.linked.items(): - for iface2, linked in v.items(): - policy = None - if net.policy == NetworkPolicy.DROP and linked: - policy = "accept" - elif net.policy == NetworkPolicy.ACCEPT and not linked: - policy = "drop" - if policy: - self.cmds.append( - f"add rule bridge {net.brname} {self.chain} " - f"iif {iface1.localname} oif {iface2.localname} " - f"{policy}" + for iface1, v in wlan._linked.items(): + for oface2, linked in v.items(): + if wlan.policy == NetworkPolicy.DROP and linked: + self.cmds.extend( + [ + f"-A {wlan.brname} -i {iface1.localname} -o {oface2.localname} -j ACCEPT", + f"-A {wlan.brname} -o {iface1.localname} -i {oface2.localname} -j ACCEPT", + ] ) - self.cmds.append( - f"add rule bridge {net.brname} {self.chain} " - f"oif {iface1.localname} iif {iface2.localname} " - f"{policy}" + elif wlan.policy == NetworkPolicy.ACCEPT and not linked: + self.cmds.extend( + [ + f"-A {wlan.brname} -i {iface1.localname} -o {oface2.localname} -j DROP", + f"-A {wlan.brname} -o {iface1.localname} -i {oface2.localname} -j DROP", + ] ) -# a global object because all networks share the same queue -# cannot have multiple threads invoking the nftables commnd -nft_queue: NftablesQueue = NftablesQueue() +# a global object because all WLANs share the same queue +# 
cannot have multiple threads invoking the ebtables commnd +ebq: EbtablesQueue = EbtablesQueue() -@dataclass -class NetworkOptions(NodeOptions): - policy: NetworkPolicy = None - """allows overriding the network policy, otherwise uses class defined default""" +def ebtablescmds(call: Callable[..., str], cmds: List[str]) -> None: + """ + Run ebtable commands. + + :param call: function to call commands + :param cmds: commands to call + :return: nothing + """ + with ebtables_lock: + for args in cmds: + call(args) class CoreNetwork(CoreNetworkBase): @@ -200,34 +266,33 @@ class CoreNetwork(CoreNetworkBase): _id: int = None, name: str = None, server: "DistributedServer" = None, - options: NetworkOptions = None, + policy: NetworkPolicy = None, ) -> None: """ - Creates a CoreNetwork instance. + Creates a LxBrNet instance. :param session: core session instance :param _id: object id :param name: object name :param server: remote server node will run on, default is None for localhost - :param options: options to create node with + :param policy: network policy """ - options = options or NetworkOptions() - super().__init__(session, _id, name, server, options) - self.policy: NetworkPolicy = options.policy if options.policy else self.policy + super().__init__(session, _id, name, server) + if name is None: + name = str(self.id) + if policy is not None: + self.policy = policy + self.name: Optional[str] = name sessionid = self.session.short_session_id() self.brname: str = f"b.{self.id}.{sessionid}" - self.has_nftables_chain: bool = False - - @classmethod - def create_options(cls) -> NetworkOptions: - return NetworkOptions() + self.has_ebtables_chain: bool = False def host_cmd( self, args: str, - env: dict[str, str] = None, - cwd: Path = None, + env: Dict[str, str] = None, + cwd: str = None, wait: bool = True, shell: bool = False, ) -> str: @@ -243,35 +308,22 @@ class CoreNetwork(CoreNetworkBase): :return: combined stdout and stderr :raises CoreCommandError: when a non-zero exit status occurs """ - logger.debug("network node(%s) cmd", self.name) + logging.debug("network node(%s) cmd", self.name) output = utils.cmd(args, env, cwd, wait, shell) self.session.distributed.execute(lambda x: x.remote_cmd(args, env, cwd, wait)) return output def startup(self) -> None: """ - Linux bridge startup logic. + Linux bridge starup logic. :return: nothing :raises CoreCommandError: when there is a command exception """ self.net_client.create_bridge(self.brname) - if self.mtu > 0: - self.net_client.set_mtu(self.brname, self.mtu) - self.has_nftables_chain = False + self.has_ebtables_chain = False self.up = True - nft_queue.start() - - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - """ - Adopt interface and set it to use this bridge as master. 
- - :param iface: interface to adpopt - :param name: formal name for interface - :return: nothing - """ - iface.net_client.set_iface_master(self.brname, iface.name) - iface.set_config() + ebq.startupdateloop(self) def shutdown(self) -> None: """ @@ -281,18 +333,27 @@ class CoreNetwork(CoreNetworkBase): """ if not self.up: return - nft_queue.stop() + + ebq.stopupdateloop(self) + try: self.net_client.delete_bridge(self.brname) - if self.has_nftables_chain: - nft_queue.delete_table(self) + if self.has_ebtables_chain: + cmds = [ + f"{EBTABLES} -D FORWARD --logical-in {self.brname} -j {self.brname}", + f"{EBTABLES} -X {self.brname}", + ] + ebtablescmds(self.host_cmd, cmds) except CoreCommandError: logging.exception("error during shutdown") + # removes veth pairs used for bridge-to-bridge connections for iface in self.get_ifaces(): iface.shutdown() + self.ifaces.clear() - self.linked.clear() + self._linked.clear() + del self.session self.up = False def attach(self, iface: CoreInterface) -> None: @@ -302,9 +363,9 @@ class CoreNetwork(CoreNetworkBase): :param iface: network interface to attach :return: nothing """ - super().attach(iface) if self.up: iface.net_client.set_iface_master(self.brname, iface.localname) + super().attach(iface) def detach(self, iface: CoreInterface) -> None: """ @@ -313,11 +374,11 @@ class CoreNetwork(CoreNetworkBase): :param iface: network interface to detach :return: nothing """ - super().detach(iface) if self.up: iface.net_client.delete_iface(self.brname, iface.localname) + super().detach(iface) - def is_linked(self, iface1: CoreInterface, iface2: CoreInterface) -> bool: + def linked(self, iface1: CoreInterface, iface2: CoreInterface) -> bool: """ Determine if the provided network interfaces are linked. @@ -328,10 +389,12 @@ class CoreNetwork(CoreNetworkBase): # check if the network interfaces are attached to this network if self.ifaces[iface1.net_id] != iface1: raise ValueError(f"inconsistency for interface {iface1.name}") + if self.ifaces[iface2.net_id] != iface2: raise ValueError(f"inconsistency for interface {iface2.name}") + try: - linked = self.linked[iface1][iface2] + linked = self._linked[iface1][iface2] except KeyError: if self.policy == NetworkPolicy.ACCEPT: linked = True @@ -339,37 +402,176 @@ class CoreNetwork(CoreNetworkBase): linked = False else: raise Exception(f"unknown policy: {self.policy.value}") - self.linked[iface1][iface2] = linked + self._linked[iface1][iface2] = linked + return linked def unlink(self, iface1: CoreInterface, iface2: CoreInterface) -> None: """ - Unlink two interfaces, resulting in adding or removing filtering rules. - - :param iface1: interface one - :param iface2: interface two - :return: nothing - """ - with self.linked_lock: - if not self.is_linked(iface1, iface2): - return - self.linked[iface1][iface2] = False - nft_queue.update(self) - - def link(self, iface1: CoreInterface, iface2: CoreInterface) -> None: - """ - Link two interfaces together, resulting in adding or removing + Unlink two interfaces, resulting in adding or removing ebtables filtering rules. 
:param iface1: interface one :param iface2: interface two :return: nothing """ - with self.linked_lock: - if self.is_linked(iface1, iface2): + with self._linked_lock: + if not self.linked(iface1, iface2): return - self.linked[iface1][iface2] = True - nft_queue.update(self) + self._linked[iface1][iface2] = False + + ebq.ebchange(self) + + def link(self, iface1: CoreInterface, iface2: CoreInterface) -> None: + """ + Link two interfaces together, resulting in adding or removing + ebtables filtering rules. + + :param iface1: interface one + :param iface2: interface two + :return: nothing + """ + with self._linked_lock: + if self.linked(iface1, iface2): + return + self._linked[iface1][iface2] = True + + ebq.ebchange(self) + + def linkconfig( + self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None + ) -> None: + """ + Configure link parameters by applying tc queuing disciplines on the interface. + + :param iface: interface one + :param options: options for configuring link + :param iface2: interface two + :return: nothing + """ + # determine if any settings have changed + changed = any( + [ + iface.setparam("bw", options.bandwidth), + iface.setparam("delay", options.delay), + iface.setparam("loss", options.loss), + iface.setparam("duplicate", options.dup), + iface.setparam("jitter", options.jitter), + iface.setparam("buffer", options.buffer), + ] + ) + if not changed: + return + + # delete tc configuration or create and add it + devname = iface.localname + if all( + [ + options.delay is None or options.delay <= 0, + options.jitter is None or options.jitter <= 0, + options.loss is None or options.loss <= 0, + options.dup is None or options.dup <= 0, + options.bandwidth is None or options.bandwidth <= 0, + options.buffer is None or options.buffer <= 0, + ] + ): + if not iface.getparam("has_netem"): + return + if self.up: + cmd = f"{TC} qdisc delete dev {devname} root handle 10:" + iface.host_cmd(cmd) + iface.setparam("has_netem", False) + else: + netem = "" + if options.bandwidth is not None: + limit = 1000 + bw = options.bandwidth / 1000 + if options.buffer is not None and options.buffer > 0: + limit = options.buffer + elif options.delay and options.bandwidth: + delay = options.delay / 1000 + limit = max(2, math.ceil((2 * bw * delay) / (8 * iface.mtu))) + netem += f" rate {bw}kbit" + netem += f" limit {limit}" + if options.delay is not None: + netem += f" delay {options.delay}us" + if options.jitter is not None: + if options.delay is None: + netem += f" delay 0us {options.jitter}us 25%" + else: + netem += f" {options.jitter}us 25%" + if options.loss is not None and options.loss > 0: + netem += f" loss {min(options.loss, 100)}%" + if options.dup is not None and options.dup > 0: + netem += f" duplicate {min(options.dup, 100)}%" + if self.up: + cmd = f"{TC} qdisc replace dev {devname} root handle 10: netem {netem}" + iface.host_cmd(cmd) + iface.setparam("has_netem", True) + + def linknet(self, net: CoreNetworkBase) -> CoreInterface: + """ + Link this bridge with another by creating a veth pair and installing + each device into each bridge. 
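[Editor's note: linkconfig above ultimately reduces to a single tc invocation; a rough sketch of the command it assembles for a given set of options, assuming delay/jitter in microseconds and bandwidth in bits per second as in the surrounding code. The buffer-based limit heuristic is omitted and the function name is illustrative.]

TC = "tc"

def netem_cmd(dev: str, bandwidth: int = None, delay: int = None,
              jitter: int = None, loss: float = None, dup: float = None) -> str:
    # assemble a netem qdisc spec roughly the way linkconfig does
    netem = ""
    if bandwidth:
        netem += f" rate {bandwidth / 1000}kbit"
    if delay:
        netem += f" delay {delay}us"
    if jitter:
        netem += f" {jitter}us 25%" if delay else f" delay 0us {jitter}us 25%"
    if loss:
        netem += f" loss {min(loss, 100)}%"
    if dup:
        netem += f" duplicate {min(dup, 100)}%"
    return f"{TC} qdisc replace dev {dev} root handle 10: netem {netem}"

# netem_cmd("veth1.2.abcd", bandwidth=54_000_000, delay=5000, loss=1.0)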
+ + :param net: network to link with + :return: created interface + """ + sessionid = self.session.short_session_id() + try: + _id = f"{self.id:x}" + except TypeError: + _id = str(self.id) + + try: + net_id = f"{net.id:x}" + except TypeError: + net_id = str(net.id) + + localname = f"veth{_id}.{net_id}.{sessionid}" + if len(localname) >= 16: + raise ValueError(f"interface local name {localname} too long") + + name = f"veth{net_id}.{_id}.{sessionid}" + if len(name) >= 16: + raise ValueError(f"interface name {name} too long") + + iface = Veth(self.session, None, name, localname, start=self.up) + self.attach(iface) + if net.up and net.brname: + iface.net_client.set_iface_master(net.brname, iface.name) + i = net.next_iface_id() + net.ifaces[i] = iface + with net._linked_lock: + net._linked[iface] = {} + iface.net = self + iface.othernet = net + return iface + + def get_linked_iface(self, net: CoreNetworkBase) -> Optional[CoreInterface]: + """ + Return the interface of that links this net with another net + (that were linked using linknet()). + + :param net: interface to get link for + :return: interface the provided network is linked to + """ + for iface in self.get_ifaces(): + if iface.othernet == net: + return iface + return None + + def add_ips(self, ips: List[str]) -> None: + """ + Add ip addresses on the bridge in the format "10.0.0.1/24". + + :param ips: ip address to add + :return: nothing + """ + if not self.up: + return + for ip in ips: + self.net_client.create_address(self.brname, ip) class GreTapBridge(CoreNetwork): @@ -414,15 +616,14 @@ class GreTapBridge(CoreNetwork): self.localip: Optional[str] = localip self.ttl: int = ttl self.gretap: Optional[GreTap] = None - if self.remoteip is not None: + if remoteip is not None: self.gretap = GreTap( - session, - remoteip, - key=self.grekey, node=self, + session=session, + remoteip=remoteip, localip=localip, ttl=ttl, - mtu=self.mtu, + key=self.grekey, ) def startup(self) -> None: @@ -433,7 +634,6 @@ class GreTapBridge(CoreNetwork): """ super().startup() if self.gretap: - self.gretap.startup() self.attach(self.gretap) def shutdown(self) -> None: @@ -448,7 +648,7 @@ class GreTapBridge(CoreNetwork): self.gretap = None super().shutdown() - def add_ips(self, ips: list[str]) -> None: + def add_ips(self, ips: List[str]) -> None: """ Set the remote tunnel endpoint. This is a one-time method for creating the GreTap device, which requires the remoteip at startup. 
@@ -459,20 +659,18 @@ class GreTapBridge(CoreNetwork): :return: nothing """ if self.gretap: - raise CoreError(f"gretap already exists for {self.name}") + raise ValueError(f"gretap already exists for {self.name}") remoteip = ips[0].split("/")[0] localip = None if len(ips) > 1: localip = ips[1].split("/")[0] self.gretap = GreTap( - self.session, - remoteip, - key=self.grekey, + session=self.session, + remoteip=remoteip, localip=localip, ttl=self.ttl, - mtu=self.mtu, + key=self.grekey, ) - self.startup() self.attach(self.gretap) def setkey(self, key: int, iface_data: InterfaceData) -> None: @@ -490,20 +688,6 @@ class GreTapBridge(CoreNetwork): self.add_ips(ips) -@dataclass -class CtrlNetOptions(NetworkOptions): - prefix: str = None - """ip4 network prefix to use for generating an address""" - updown_script: str = None - """script to execute during startup and shutdown""" - serverintf: str = None - """used to associate an interface with the control network bridge""" - assign_address: bool = True - """used to determine if a specific address should be assign using hostid""" - hostid: int = None - """used with assign address to """ - - class CtrlNet(CoreNetwork): """ Control network functionality. @@ -512,7 +696,7 @@ class CtrlNet(CoreNetwork): policy: NetworkPolicy = NetworkPolicy.ACCEPT # base control interface index CTRLIF_IDX_BASE: int = 99 - DEFAULT_PREFIX_LIST: list[str] = [ + DEFAULT_PREFIX_LIST: List[str] = [ "172.16.0.0/24 172.16.1.0/24 172.16.2.0/24 172.16.3.0/24 172.16.4.0/24", "172.17.0.0/24 172.17.1.0/24 172.17.2.0/24 172.17.3.0/24 172.17.4.0/24", "172.18.0.0/24 172.18.1.0/24 172.18.2.0/24 172.18.3.0/24 172.18.4.0/24", @@ -522,32 +706,36 @@ class CtrlNet(CoreNetwork): def __init__( self, session: "Session", + prefix: str, _id: int = None, name: str = None, + hostid: int = None, server: "DistributedServer" = None, - options: CtrlNetOptions = None, + assign_address: bool = True, + updown_script: str = None, + serverintf: str = None, ) -> None: """ Creates a CtrlNet instance. 
:param session: core session instance :param _id: node id - :param name: node name + :param name: node namee + :param prefix: control network ipv4 prefix + :param hostid: host id :param server: remote server node will run on, default is None for localhost - :param options: node options for creation + :param assign_address: assigned address + :param updown_script: updown script + :param serverintf: server interface + :return: """ - options = options or CtrlNetOptions() - super().__init__(session, _id, name, server, options) - self.prefix: netaddr.IPNetwork = netaddr.IPNetwork(options.prefix).cidr - self.hostid: Optional[int] = options.hostid - self.assign_address: bool = options.assign_address - self.updown_script: Optional[str] = options.updown_script - self.serverintf: Optional[str] = options.serverintf - - @classmethod - def create_options(cls) -> CtrlNetOptions: - return CtrlNetOptions() + self.prefix: netaddr.IPNetwork = netaddr.IPNetwork(prefix).cidr + self.hostid: Optional[int] = hostid + self.assign_address: bool = assign_address + self.updown_script: Optional[str] = updown_script + self.serverintf: Optional[str] = serverintf + super().__init__(session, _id, name, server) def add_addresses(self, index: int) -> None: """ @@ -581,7 +769,7 @@ class CtrlNet(CoreNetwork): raise CoreError(f"old bridges exist for node: {self.id}") super().startup() - logger.info("added control network bridge: %s %s", self.brname, self.prefix) + logging.info("added control network bridge: %s %s", self.brname, self.prefix) if self.hostid and self.assign_address: self.add_addresses(self.hostid) @@ -589,7 +777,7 @@ class CtrlNet(CoreNetwork): self.add_addresses(-2) if self.updown_script: - logger.info( + logging.info( "interface %s updown script (%s startup) called", self.brname, self.updown_script, @@ -609,7 +797,7 @@ class CtrlNet(CoreNetwork): try: self.net_client.delete_iface(self.brname, self.serverintf) except CoreCommandError: - logger.exception( + logging.exception( "error deleting server interface %s from bridge %s", self.serverintf, self.brname, @@ -617,17 +805,26 @@ class CtrlNet(CoreNetwork): if self.updown_script is not None: try: - logger.info( + logging.info( "interface %s updown script (%s shutdown) called", self.brname, self.updown_script, ) self.host_cmd(f"{self.updown_script} {self.brname} shutdown") except CoreCommandError: - logger.exception("error issuing shutdown script shutdown") + logging.exception("error issuing shutdown script shutdown") super().shutdown() + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: + """ + Do not include CtrlNet in link messages describing this session. + + :param flags: message flags + :return: list of link data + """ + return [] + class PtpNet(CoreNetwork): """ @@ -647,14 +844,80 @@ class PtpNet(CoreNetwork): raise CoreError("ptp links support at most 2 network interfaces") super().attach(iface) - def startup(self) -> None: + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: """ - Startup for a p2p node, that disables mac learning after normal startup. + Build CORE API TLVs for a point-to-point link. One Link message + describes this network. 
- :return: nothing + :param flags: message flags + :return: list of link data """ - super().startup() - self.net_client.set_mac_learning(self.brname, LEARNING_DISABLED) + all_links = [] + if len(self.ifaces) != 2: + return all_links + + ifaces = self.get_ifaces() + iface1 = ifaces[0] + iface2 = ifaces[1] + unidirectional = 0 + if iface1.getparams() != iface2.getparams(): + unidirectional = 1 + + mac = str(iface1.mac) if iface1.mac else None + iface1_data = InterfaceData( + id=iface1.node.get_iface_id(iface1), name=iface1.name, mac=mac + ) + ip4 = iface1.get_ip4() + if ip4: + iface1_data.ip4 = str(ip4.ip) + iface1_data.ip4_mask = ip4.prefixlen + ip6 = iface1.get_ip6() + if ip6: + iface1_data.ip6 = str(ip6.ip) + iface1_data.ip6_mask = ip6.prefixlen + + mac = str(iface2.mac) if iface2.mac else None + iface2_data = InterfaceData( + id=iface2.node.get_iface_id(iface2), name=iface2.name, mac=mac + ) + ip4 = iface2.get_ip4() + if ip4: + iface2_data.ip4 = str(ip4.ip) + iface2_data.ip4_mask = ip4.prefixlen + ip6 = iface2.get_ip6() + if ip6: + iface2_data.ip6 = str(ip6.ip) + iface2_data.ip6_mask = ip6.prefixlen + + options_data = iface1.get_link_options(unidirectional) + link_data = LinkData( + message_type=flags, + type=self.linktype, + node1_id=iface1.node.id, + node2_id=iface2.node.id, + iface1=iface1_data, + iface2=iface2_data, + options=options_data, + ) + all_links.append(link_data) + + # build a 2nd link message for the upstream link parameters + # (swap if1 and if2) + if unidirectional: + iface1_data = InterfaceData(id=iface2.node.get_iface_id(iface2)) + iface2_data = InterfaceData(id=iface1.node.get_iface_id(iface1)) + options_data = iface2.get_link_options(unidirectional) + link_data = LinkData( + message_type=MessageFlags.NONE, + type=self.linktype, + node1_id=iface2.node.id, + node2_id=iface1.node.id, + iface1=iface1_data, + iface2=iface2_data, + options=options_data, + ) + all_links.append(link_data) + return all_links class SwitchNode(CoreNetwork): @@ -662,7 +925,9 @@ class SwitchNode(CoreNetwork): Provides switch functionality within a core node. """ + apitype: NodeTypes = NodeTypes.SWITCH policy: NetworkPolicy = NetworkPolicy.ACCEPT + type: str = "lanswitch" class HubNode(CoreNetwork): @@ -671,7 +936,9 @@ class HubNode(CoreNetwork): ports by turning off MAC address learning. """ + apitype: NodeTypes = NodeTypes.HUB policy: NetworkPolicy = NetworkPolicy.ACCEPT + type: str = "hub" def startup(self) -> None: """ @@ -688,7 +955,10 @@ class WlanNode(CoreNetwork): Provides wireless lan functionality within a core node. """ + apitype: NodeTypes = NodeTypes.WIRELESS_LAN + linktype: LinkTypes = LinkTypes.WIRED policy: NetworkPolicy = NetworkPolicy.DROP + type: str = "wlan" def __init__( self, @@ -696,7 +966,7 @@ class WlanNode(CoreNetwork): _id: int = None, name: str = None, server: "DistributedServer" = None, - options: NetworkOptions = None, + policy: NetworkPolicy = None, ) -> None: """ Create a WlanNode instance. 
@@ -706,11 +976,11 @@ class WlanNode(CoreNetwork): :param name: node name :param server: remote server node will run on, default is None for localhost - :param options: options to create node with + :param policy: wlan policy """ - super().__init__(session, _id, name, server, options) + super().__init__(session, _id, name, server, policy) # wireless and mobility models (BasicRangeModel, Ns2WaypointMobility) - self.wireless_model: Optional[WirelessModel] = None + self.model: Optional[WirelessModel] = None self.mobility: Optional[WayPointMobility] = None def startup(self) -> None: @@ -720,7 +990,7 @@ class WlanNode(CoreNetwork): :return: nothing """ super().startup() - nft_queue.update(self) + ebq.ebchange(self) def attach(self, iface: CoreInterface) -> None: """ @@ -730,55 +1000,55 @@ class WlanNode(CoreNetwork): :return: nothing """ super().attach(iface) - if self.wireless_model: - iface.poshook = self.wireless_model.position_callback + if self.model: + iface.poshook = self.model.position_callback iface.setposition() - def setmodel(self, wireless_model: type["WirelessModel"], config: dict[str, str]): + def setmodel(self, model: "WirelessModelType", config: Dict[str, str]): """ Sets the mobility and wireless model. - :param wireless_model: wireless model to set to + :param model: wireless model to set to :param config: configuration for model being set :return: nothing """ - logger.debug("node(%s) setting model: %s", self.name, wireless_model.name) - if wireless_model.config_type == RegisterTlvs.WIRELESS: - self.wireless_model = wireless_model(session=self.session, _id=self.id) + logging.debug("node(%s) setting model: %s", self.name, model.name) + if model.config_type == RegisterTlvs.WIRELESS: + self.model = model(session=self.session, _id=self.id) for iface in self.get_ifaces(): - iface.poshook = self.wireless_model.position_callback + iface.poshook = self.model.position_callback iface.setposition() self.updatemodel(config) - elif wireless_model.config_type == RegisterTlvs.MOBILITY: - self.mobility = wireless_model(session=self.session, _id=self.id) + elif model.config_type == RegisterTlvs.MOBILITY: + self.mobility = model(session=self.session, _id=self.id) self.mobility.update_config(config) - def update_mobility(self, config: dict[str, str]) -> None: + def update_mobility(self, config: Dict[str, str]) -> None: if not self.mobility: raise CoreError(f"no mobility set to update for node({self.name})") self.mobility.update_config(config) - def updatemodel(self, config: dict[str, str]) -> None: - if not self.wireless_model: + def updatemodel(self, config: Dict[str, str]) -> None: + if not self.model: raise CoreError(f"no model set to update for node({self.name})") - logger.debug( - "node(%s) updating model(%s): %s", self.id, self.wireless_model.name, config + logging.debug( + "node(%s) updating model(%s): %s", self.id, self.model.name, config ) - self.wireless_model.update_config(config) + self.model.update_config(config) for iface in self.get_ifaces(): iface.setposition() - def links(self, flags: MessageFlags = MessageFlags.NONE) -> list[LinkData]: + def links(self, flags: MessageFlags = MessageFlags.NONE) -> List[LinkData]: """ Retrieve all link data. 
:param flags: message flags :return: list of link data """ - if self.wireless_model: - return self.wireless_model.links(flags) - else: - return [] + links = super().links(flags) + if self.model: + links.extend(self.model.links(flags)) + return links class TunnelNode(GreTapBridge): @@ -786,4 +1056,6 @@ class TunnelNode(GreTapBridge): Provides tunnel functionality in a core node. """ + apitype: NodeTypes = NodeTypes.TUNNEL policy: NetworkPolicy = NetworkPolicy.ACCEPT + type: str = "tunnel" diff --git a/daemon/core/nodes/physical.py b/daemon/core/nodes/physical.py index 30640fd8..4e8c9464 100644 --- a/daemon/core/nodes/physical.py +++ b/daemon/core/nodes/physical.py @@ -3,38 +3,257 @@ PhysicalNode class for including real systems in the emulated network. """ import logging -from pathlib import Path -from typing import TYPE_CHECKING, Optional - -import netaddr +import os +import threading +from typing import IO, TYPE_CHECKING, List, Optional, Tuple from core.emulator.data import InterfaceData, LinkOptions from core.emulator.distributed import DistributedServer -from core.emulator.enumerations import TransportType +from core.emulator.enumerations import NodeTypes, TransportType from core.errors import CoreCommandError, CoreError -from core.executables import BASH, TEST, UMOUNT -from core.nodes.base import CoreNode, CoreNodeBase, CoreNodeOptions, NodeOptions -from core.nodes.interface import CoreInterface - -logger = logging.getLogger(__name__) +from core.executables import MOUNT, TEST, UMOUNT +from core.nodes.base import CoreNetworkBase, CoreNodeBase +from core.nodes.interface import DEFAULT_MTU, CoreInterface +from core.nodes.network import CoreNetwork, GreTap if TYPE_CHECKING: from core.emulator.session import Session +class PhysicalNode(CoreNodeBase): + def __init__( + self, + session: "Session", + _id: int = None, + name: str = None, + nodedir: str = None, + server: DistributedServer = None, + ) -> None: + super().__init__(session, _id, name, server) + if not self.server: + raise CoreError("physical nodes must be assigned to a remote server") + self.nodedir: Optional[str] = nodedir + self.lock: threading.RLock = threading.RLock() + self._mounts: List[Tuple[str, str]] = [] + + def startup(self) -> None: + with self.lock: + self.makenodedir() + self.up = True + + def shutdown(self) -> None: + if not self.up: + return + + with self.lock: + while self._mounts: + _source, target = self._mounts.pop(-1) + self.umount(target) + + for iface in self.get_ifaces(): + iface.shutdown() + + self.rmnodedir() + + def path_exists(self, path: str) -> bool: + """ + Determines if a file or directory path exists. + + :param path: path to file or directory + :return: True if path exists, False otherwise + """ + try: + self.host_cmd(f"{TEST} -e {path}") + return True + except CoreCommandError: + return False + + def termcmdstring(self, sh: str = "/bin/sh") -> str: + """ + Create a terminal command string. + + :param sh: shell to execute command in + :return: str + """ + return sh + + def set_mac(self, iface_id: int, mac: str) -> None: + """ + Set mac address for an interface. + + :param iface_id: index of interface to set hardware address for + :param mac: mac address to set + :return: nothing + :raises CoreCommandError: when a non-zero exit status occurs + """ + iface = self.get_iface(iface_id) + iface.set_mac(mac) + if self.up: + self.net_client.device_mac(iface.name, str(iface.mac)) + + def add_ip(self, iface_id: int, ip: str) -> None: + """ + Add an ip address to an interface in the format "10.0.0.1/24". 
+ + :param iface_id: id of interface to add address to + :param ip: address to add to interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + iface = self.get_iface(iface_id) + iface.add_ip(ip) + if self.up: + self.net_client.create_address(iface.name, ip) + + def remove_ip(self, iface_id: int, ip: str) -> None: + """ + Remove an ip address from an interface in the format "10.0.0.1/24". + + :param iface_id: id of interface to delete address from + :param ip: ip address to remove from interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + iface = self.get_iface(iface_id) + iface.remove_ip(ip) + if self.up: + self.net_client.delete_address(iface.name, ip) + + def adopt_iface( + self, iface: CoreInterface, iface_id: int, mac: str, ips: List[str] + ) -> None: + """ + When a link message is received linking this node to another part of + the emulation, no new interface is created; instead, adopt the + GreTap interface as the node interface. + """ + iface.name = f"gt{iface_id}" + iface.node = self + self.add_iface(iface, iface_id) + # use a more reasonable name, e.g. "gt0" instead of "gt.56286.150" + if self.up: + self.net_client.device_down(iface.localname) + self.net_client.device_name(iface.localname, iface.name) + iface.localname = iface.name + if mac: + self.set_mac(iface_id, mac) + for ip in ips: + self.add_ip(iface_id, ip) + if self.up: + self.net_client.device_up(iface.localname) + + def linkconfig( + self, iface: CoreInterface, options: LinkOptions, iface2: CoreInterface = None + ) -> None: + """ + Apply tc queing disciplines using linkconfig. + """ + linux_bridge = CoreNetwork(self.session) + linux_bridge.up = True + linux_bridge.linkconfig(iface, options, iface2) + del linux_bridge + + def next_iface_id(self) -> int: + with self.lock: + while self.iface_id in self.ifaces: + self.iface_id += 1 + iface_id = self.iface_id + self.iface_id += 1 + return iface_id + + def new_iface( + self, net: CoreNetworkBase, iface_data: InterfaceData + ) -> CoreInterface: + logging.info("creating interface") + ips = iface_data.get_ips() + iface_id = iface_data.id + if iface_id is None: + iface_id = self.next_iface_id() + name = iface_data.name + if name is None: + name = f"gt{iface_id}" + if self.up: + # this is reached when this node is linked to a network node + # tunnel to net not built yet, so build it now and adopt it + _, remote_tap = self.session.distributed.create_gre_tunnel(net, self.server) + self.adopt_iface(remote_tap, iface_id, iface_data.mac, ips) + return remote_tap + else: + # this is reached when configuring services (self.up=False) + iface = GreTap(node=self, name=name, session=self.session, start=False) + self.adopt_iface(iface, iface_id, iface_data.mac, ips) + return iface + + def privatedir(self, path: str) -> None: + if path[0] != "/": + raise ValueError(f"path not fully qualified: {path}") + hostpath = os.path.join( + self.nodedir, os.path.normpath(path).strip("/").replace("/", ".") + ) + os.mkdir(hostpath) + self.mount(hostpath, path) + + def mount(self, source: str, target: str) -> None: + source = os.path.abspath(source) + logging.info("mounting %s at %s", source, target) + os.makedirs(target) + self.host_cmd(f"{MOUNT} --bind {source} {target}", cwd=self.nodedir) + self._mounts.append((source, target)) + + def umount(self, target: str) -> None: + logging.info("unmounting '%s'", 
target) + try: + self.host_cmd(f"{UMOUNT} -l {target}", cwd=self.nodedir) + except CoreCommandError: + logging.exception("unmounting failed for %s", target) + + def opennodefile(self, filename: str, mode: str = "w") -> IO: + dirname, basename = os.path.split(filename) + if not basename: + raise ValueError("no basename for filename: " + filename) + + if dirname and dirname[0] == "/": + dirname = dirname[1:] + + dirname = dirname.replace("/", ".") + dirname = os.path.join(self.nodedir, dirname) + if not os.path.isdir(dirname): + os.makedirs(dirname, mode=0o755) + + hostfilename = os.path.join(dirname, basename) + return open(hostfilename, mode) + + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: + with self.opennodefile(filename, "w") as f: + f.write(contents) + os.chmod(f.name, mode) + logging.info("created nodefile: '%s'; mode: 0%o", f.name, mode) + + def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: + return self.host_cmd(args, wait=wait) + + def addfile(self, srcname: str, filename: str) -> None: + raise CoreError("physical node does not support addfile") + + class Rj45Node(CoreNodeBase): """ RJ45Node is a physical interface on the host linked to the emulated network. """ + apitype: NodeTypes = NodeTypes.RJ45 + type: str = "rj45" + def __init__( self, session: "Session", _id: int = None, name: str = None, + mtu: int = DEFAULT_MTU, server: DistributedServer = None, - options: NodeOptions = None, ) -> None: """ Create an RJ45Node instance. @@ -42,17 +261,19 @@ class Rj45Node(CoreNodeBase): :param session: core session instance :param _id: node id :param name: node name + :param mtu: rj45 mtu :param server: remote server node will run on, default is None for localhost - :param options: option to create node with """ - super().__init__(session, _id, name, server, options) + super().__init__(session, _id, name, server) self.iface: CoreInterface = CoreInterface( - self.iface_id, name, name, session.use_ovs(), node=self, server=server + session, self, name, name, mtu, server ) self.iface.transport_type = TransportType.RAW + self.lock: threading.RLock = threading.RLock() + self.iface_id: Optional[int] = None self.old_up: bool = False - self.old_addrs: list[tuple[str, Optional[str]]] = [] + self.old_addrs: List[Tuple[str, Optional[str]]] = [] def startup(self) -> None: """ @@ -62,7 +283,7 @@ class Rj45Node(CoreNodeBase): :raises CoreCommandError: when there is a command exception """ # interface will also be marked up during net.attach() - self.save_state() + self.savestate() self.net_client.device_up(self.iface.localname) self.up = True @@ -83,7 +304,7 @@ class Rj45Node(CoreNodeBase): except CoreCommandError: pass self.up = False - self.restore_state() + self.restorestate() def path_exists(self, path: str) -> bool: """ @@ -98,28 +319,33 @@ class Rj45Node(CoreNodeBase): except CoreCommandError: return False - def create_iface( - self, iface_data: InterfaceData = None, options: LinkOptions = None + def new_iface( + self, net: CoreNetworkBase, iface_data: InterfaceData ) -> CoreInterface: - with self.lock: - if self.iface.id in self.ifaces: - raise CoreError( - f"rj45({self.name}) nodes support at most 1 network interface" - ) - if iface_data and iface_data.mtu is not None: - self.iface.mtu = iface_data.mtu - self.iface.ip4s.clear() - self.iface.ip6s.clear() - for ip in iface_data.get_ips(): - self.iface.add_ip(ip) - self.ifaces[self.iface.id] = self.iface - if self.up: - for ip in self.iface.ips(): - 
self.net_client.create_address(self.iface.name, str(ip)) - return self.iface + """ + This is called when linking with another node. Since this node + represents an interface, we do not create another object here, + but attach ourselves to the given network. - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - raise CoreError(f"rj45({self.name}) does not support adopt interface") + :param net: new network instance + :param iface_data: interface data for new interface + :return: interface index + :raises ValueError: when an interface has already been created, one max + """ + with self.lock: + iface_id = iface_data.id + if iface_id is None: + iface_id = 0 + if self.iface.net is not None: + raise CoreError( + f"RJ45({self.name}) nodes support at most 1 network interface" + ) + self.ifaces[iface_id] = self.iface + self.iface_id = iface_id + self.iface.attachnet(net) + for ip in iface_data.get_ips(): + self.add_ip(ip) + return self.iface def delete_iface(self, iface_id: int) -> None: """ @@ -130,10 +356,16 @@ class Rj45Node(CoreNodeBase): """ self.get_iface(iface_id) self.ifaces.pop(iface_id) + if self.iface.net is None: + raise CoreError( + f"RJ45({self.name}) is not currently connected to a network" + ) + self.iface.detachnet() + self.iface.net = None self.shutdown() def get_iface(self, iface_id: int) -> CoreInterface: - if iface_id not in self.ifaces: + if iface_id != self.iface_id or iface_id not in self.ifaces: raise CoreError(f"node({self.name}) interface({iface_id}) does not exist") return self.iface @@ -147,19 +379,44 @@ class Rj45Node(CoreNodeBase): """ if iface is not self.iface: raise CoreError(f"node({self.name}) does not have interface({iface.name})") - return self.iface.id + return self.iface_id - def save_state(self) -> None: + def add_ip(self, ip: str) -> None: + """ + Add an ip address to an interface in the format "10.0.0.1/24". + + :param ip: address to add to interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + self.iface.add_ip(ip) + if self.up: + self.net_client.create_address(self.name, ip) + + def remove_ip(self, ip: str) -> None: + """ + Remove an ip address from an interface in the format "10.0.0.1/24". + + :param ip: ip address to remove from interface + :return: nothing + :raises CoreError: when ip address provided is invalid + :raises CoreCommandError: when a non-zero exit status occurs + """ + self.iface.remove_ip(ip) + if self.up: + self.net_client.delete_address(self.name, ip) + + def savestate(self) -> None: """ Save the addresses and other interface state before using the - interface for emulation purposes. + interface for emulation purposes. 
TODO: save/restore the PROMISC flag :return: nothing :raises CoreCommandError: when there is a command exception """ - # TODO: save/restore the PROMISC flag self.old_up = False - self.old_addrs: list[tuple[str, Optional[str]]] = [] + self.old_addrs: List[Tuple[str, Optional[str]]] = [] localname = self.iface.localname output = self.net_client.address_show(localname) for line in output.split("\n"): @@ -171,17 +428,14 @@ class Rj45Node(CoreNodeBase): if "UP" in flags: self.old_up = True elif items[0] == "inet": - broadcast = None - if items[2] == "brd": - broadcast = items[3] - self.old_addrs.append((items[1], broadcast)) + self.old_addrs.append((items[1], items[3])) elif items[0] == "inet6": if items[1][:4] == "fe80": continue self.old_addrs.append((items[1], None)) - logger.info("saved rj45 state: addrs(%s) up(%s)", self.old_addrs, self.old_up) + logging.info("saved rj45 state: addrs(%s) up(%s)", self.old_addrs, self.old_up) - def restore_state(self) -> None: + def restorestate(self) -> None: """ Restore the addresses and other interface state after using it. @@ -189,7 +443,7 @@ class Rj45Node(CoreNodeBase): :raises CoreCommandError: when there is a command exception """ localname = self.iface.localname - logger.info("restoring rj45 state: %s", localname) + logging.info("restoring rj45 state: %s", localname) for addr in self.old_addrs: self.net_client.create_address(localname, addr[0], addr[1]) if self.old_up: @@ -210,80 +464,11 @@ class Rj45Node(CoreNodeBase): def termcmdstring(self, sh: str) -> str: raise CoreError("rj45 does not support terminal commands") + def addfile(self, srcname: str, filename: str) -> None: + raise CoreError("rj45 does not support addfile") + + def nodefile(self, filename: str, contents: str, mode: int = 0o644) -> None: + raise CoreError("rj45 does not support nodefile") + def cmd(self, args: str, wait: bool = True, shell: bool = False) -> str: raise CoreError("rj45 does not support cmds") - - def create_dir(self, dir_path: Path) -> None: - raise CoreError("rj45 does not support creating directories") - - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: - raise CoreError("rj45 does not support creating files") - - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: - raise CoreError("rj45 does not support copying files") - - -class PhysicalNode(CoreNode): - def __init__( - self, - session: "Session", - _id: int = None, - name: str = None, - server: DistributedServer = None, - options: CoreNodeOptions = None, - ) -> None: - if not self.server: - raise CoreError("physical nodes must be assigned to a remote server") - super().__init__(session, _id, name, server, options) - - def startup(self) -> None: - with self.lock: - self.makenodedir() - self.up = True - - def shutdown(self) -> None: - if not self.up: - return - with self.lock: - while self._mounts: - _, target_path = self._mounts.pop(-1) - self.umount(target_path) - for iface in self.get_ifaces(): - iface.shutdown() - self.rmnodedir() - - def create_cmd(self, args: str, shell: bool = False) -> str: - if shell: - args = f'{BASH} -c "{args}"' - return args - - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - # validate iface belongs to node and get id - iface_id = self.get_iface_id(iface) - if iface_id == -1: - raise CoreError(f"adopting unknown iface({iface.name})") - # turn checksums off - self.node_net_client.checksums_off(iface.name) - # retrieve flow id for container - iface.flow_id = self.node_net_client.get_ifindex(iface.name) - 
logger.debug("interface flow index: %s - %s", iface.name, iface.flow_id) - if iface.mac: - self.net_client.device_mac(iface.name, str(iface.mac)) - # set all addresses - for ip in iface.ips(): - # ipv4 check - broadcast = None - if netaddr.valid_ipv4(ip): - broadcast = "+" - self.node_net_client.create_address(iface.name, str(ip), broadcast) - # configure iface options - iface.set_config() - # set iface up - self.net_client.device_up(iface.name) - - def umount(self, target_path: Path) -> None: - logger.info("unmounting '%s'", target_path) - try: - self.host_cmd(f"{UMOUNT} -l {target_path}", cwd=self.directory) - except CoreCommandError: - logger.exception("unmounting failed for %s", target_path) diff --git a/daemon/core/nodes/podman.py b/daemon/core/nodes/podman.py deleted file mode 100644 index 00ef24fc..00000000 --- a/daemon/core/nodes/podman.py +++ /dev/null @@ -1,271 +0,0 @@ -import json -import logging -import shlex -from dataclasses import dataclass, field -from pathlib import Path -from tempfile import NamedTemporaryFile -from typing import TYPE_CHECKING - -from core.emulator.distributed import DistributedServer -from core.errors import CoreCommandError, CoreError -from core.executables import BASH -from core.nodes.base import CoreNode, CoreNodeOptions - -logger = logging.getLogger(__name__) - -if TYPE_CHECKING: - from core.emulator.session import Session - -PODMAN: str = "podman" - - -@dataclass -class PodmanOptions(CoreNodeOptions): - image: str = "ubuntu" - """image used when creating container""" - binds: list[tuple[str, str]] = field(default_factory=list) - """bind mount source and destinations to setup within container""" - volumes: list[tuple[str, str, bool, bool]] = field(default_factory=list) - """ - volume mount source, destination, unique, delete to setup within container - - unique is True for node unique volume naming - delete is True for deleting volume mount during shutdown - """ - - -@dataclass -class VolumeMount: - src: str - """volume mount name""" - dst: str - """volume mount destination directory""" - unique: bool = True - """True to create a node unique prefixed name for this volume""" - delete: bool = True - """True to delete the volume during shutdown""" - path: str = None - """path to the volume on the host""" - - -class PodmanNode(CoreNode): - """ - Provides logic for creating a Podman based node. - """ - - def __init__( - self, - session: "Session", - _id: int = None, - name: str = None, - server: DistributedServer = None, - options: PodmanOptions = None, - ) -> None: - """ - Create a PodmanNode instance. - - :param session: core session instance - :param _id: node id - :param name: node name - :param server: remote server node - will run on, default is None for localhost - :param options: options for creating node - """ - options = options or PodmanOptions() - super().__init__(session, _id, name, server, options) - self.image: str = options.image - self.binds: list[tuple[str, str]] = options.binds - self.volumes: dict[str, VolumeMount] = {} - for src, dst, unique, delete in options.volumes: - src_name = self._unique_name(src) if unique else src - self.volumes[src] = VolumeMount(src_name, dst, unique, delete) - - @classmethod - def create_options(cls) -> PodmanOptions: - """ - Return default creation options, which can be used during node creation. - - :return: podman options - """ - return PodmanOptions() - - def create_cmd(self, args: str, shell: bool = False) -> str: - """ - Create command used to run commands within the context of a node. 
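[Editor's note: create_cmd below simply prefixes the node command with a podman exec, quoting shell-style commands; a small sketch of the equivalent wrapping, assuming bash is available in the container image. Names here are illustrative.]

import shlex

PODMAN = "podman"
BASH = "bash"

def wrap_cmd(container: str, args: str, shell: bool = False) -> str:
    # quote the whole command so the shell inside the container sees it intact
    if shell:
        args = f"{BASH} -c {shlex.quote(args)}"
    return f"{PODMAN} exec {container} {args}"

# wrap_cmd("n1", "echo $HOSTNAME > /tmp/out", shell=True)
# -> "podman exec n1 bash -c 'echo $HOSTNAME > /tmp/out'"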
- - :param args: command arguments - :param shell: True to run shell like, False otherwise - :return: node command - """ - if shell: - args = f"{BASH} -c {shlex.quote(args)}" - return f"{PODMAN} exec {self.name} {args}" - - def _unique_name(self, name: str) -> str: - """ - Creates a session/node unique prefixed name for the provided input. - - :param name: name to make unique - :return: unique session/node prefixed name - """ - return f"{self.session.id}.{self.id}.{name}" - - def alive(self) -> bool: - """ - Check if the node is alive. - - :return: True if node is alive, False otherwise - """ - try: - running = self.host_cmd( - f"{PODMAN} inspect -f '{{{{.State.Running}}}}' {self.name}" - ) - return json.loads(running) - except CoreCommandError: - return False - - def startup(self) -> None: - """ - Create a podman container instance for the specified image. - - :return: nothing - """ - with self.lock: - if self.up: - raise CoreError(f"starting node({self.name}) that is already up") - # create node directory - self.makenodedir() - # setup commands for creating bind/volume mounts - binds = "" - for src, dst in self.binds: - binds += f"--mount type=bind,source={src},target={dst} " - volumes = "" - for volume in self.volumes.values(): - volumes += ( - f"--mount type=volume," f"source={volume.src},target={volume.dst} " - ) - # normalize hostname - hostname = self.name.replace("_", "-") - # create container and retrieve the created containers PID - self.host_cmd( - f"{PODMAN} run -td --init --net=none --hostname {hostname} " - f"--name {self.name} --sysctl net.ipv6.conf.all.disable_ipv6=0 " - f"{binds} {volumes} " - f"--privileged {self.image} tail -f /dev/null" - ) - # retrieve pid and process environment for use in nsenter commands - self.pid = self.host_cmd( - f"{PODMAN} inspect -f '{{{{.State.Pid}}}}' {self.name}" - ) - # setup symlinks for bind and volume mounts within - for src, dst in self.binds: - link_path = self.host_path(Path(dst), True) - self.host_cmd(f"ln -s {src} {link_path}") - for volume in self.volumes.values(): - volume.path = self.host_cmd( - f"{PODMAN} volume inspect -f '{{{{.Mountpoint}}}}' {volume.src}" - ) - link_path = self.host_path(Path(volume.dst), True) - self.host_cmd(f"ln -s {volume.path} {link_path}") - logger.debug("node(%s) pid: %s", self.name, self.pid) - self.up = True - - def shutdown(self) -> None: - """ - Shutdown logic. - - :return: nothing - """ - # nothing to do if node is not up - if not self.up: - return - with self.lock: - self.ifaces.clear() - self.host_cmd(f"{PODMAN} rm -f {self.name}") - for volume in self.volumes.values(): - if volume.delete: - self.host_cmd(f"{PODMAN} volume rm {volume.src}") - self.up = False - - def termcmdstring(self, sh: str = "/bin/sh") -> str: - """ - Create a terminal command string. - - :param sh: shell to execute command in - :return: str - """ - terminal = f"{PODMAN} exec -it {self.name} {sh}" - if self.server is None: - return terminal - else: - return f"ssh -X -f {self.server.host} xterm -e {terminal}" - - def create_dir(self, dir_path: Path) -> None: - """ - Create a private directory. - - :param dir_path: path to create - :return: nothing - """ - logger.debug("creating node dir: %s", dir_path) - self.cmd(f"mkdir -p {dir_path}") - - def mount(self, src_path: str, target_path: str) -> None: - """ - Create and mount a directory. 
- - :param src_path: source directory to mount - :param target_path: target directory to create - :return: nothing - :raises CoreCommandError: when a non-zero exit status occurs - """ - logger.debug("mounting source(%s) target(%s)", src_path, target_path) - raise Exception("not supported") - - def create_file(self, file_path: Path, contents: str, mode: int = 0o644) -> None: - """ - Create a node file with a given mode. - - :param file_path: name of file to create - :param contents: contents of file - :param mode: mode for file - :return: nothing - """ - logger.debug("node(%s) create file(%s) mode(%o)", self.name, file_path, mode) - temp = NamedTemporaryFile(delete=False) - temp.write(contents.encode()) - temp.close() - temp_path = Path(temp.name) - directory = file_path.parent - if str(directory) != ".": - self.cmd(f"mkdir -m {0o755:o} -p {directory}") - if self.server is not None: - self.server.remote_put(temp_path, temp_path) - self.host_cmd(f"{PODMAN} cp {temp_path} {self.name}:{file_path}") - self.cmd(f"chmod {mode:o} {file_path}") - if self.server is not None: - self.host_cmd(f"rm -f {temp_path}") - temp_path.unlink() - - def copy_file(self, src_path: Path, dst_path: Path, mode: int = None) -> None: - """ - Copy a file to a node, following symlinks and preserving metadata. - Change file mode if specified. - - :param dst_path: file name to copy file to - :param src_path: file to copy - :param mode: mode to copy to - :return: nothing - """ - logger.info( - "node file copy file(%s) source(%s) mode(%o)", dst_path, src_path, mode or 0 - ) - self.cmd(f"mkdir -p {dst_path.parent}") - if self.server: - temp = NamedTemporaryFile(delete=False) - temp_path = Path(temp.name) - src_path = temp_path - self.server.remote_put(src_path, temp_path) - self.host_cmd(f"{PODMAN} cp {src_path} {self.name}:{dst_path}") - if mode is not None: - self.cmd(f"chmod {mode:o} {dst_path}") diff --git a/daemon/core/nodes/wireless.py b/daemon/core/nodes/wireless.py deleted file mode 100644 index 51a98917..00000000 --- a/daemon/core/nodes/wireless.py +++ /dev/null @@ -1,345 +0,0 @@ -""" -Defines a wireless node that allows programmatic link connectivity and -configuration between pairs of nodes. 
-""" -import copy -import logging -import math -import secrets -from dataclasses import dataclass -from typing import TYPE_CHECKING - -from core.config import ConfigBool, ConfigFloat, ConfigInt, Configuration -from core.emulator.data import LinkData, LinkOptions -from core.emulator.enumerations import LinkTypes, MessageFlags -from core.errors import CoreError -from core.executables import NFTABLES -from core.nodes.base import CoreNetworkBase, NodeOptions -from core.nodes.interface import CoreInterface - -if TYPE_CHECKING: - from core.emulator.session import Session - from core.emulator.distributed import DistributedServer - -logger = logging.getLogger(__name__) -CONFIG_ENABLED: bool = True -CONFIG_RANGE: float = 400.0 -CONFIG_LOSS_RANGE: float = 300.0 -CONFIG_LOSS_FACTOR: float = 1.0 -CONFIG_LOSS: float = 0.0 -CONFIG_DELAY: int = 5000 -CONFIG_BANDWIDTH: int = 54_000_000 -CONFIG_JITTER: int = 0 -KEY_ENABLED: str = "movement" -KEY_RANGE: str = "max-range" -KEY_BANDWIDTH: str = "bandwidth" -KEY_DELAY: str = "delay" -KEY_JITTER: str = "jitter" -KEY_LOSS_RANGE: str = "loss-range" -KEY_LOSS_FACTOR: str = "loss-factor" -KEY_LOSS: str = "loss" - - -def calc_distance( - point1: tuple[float, float, float], point2: tuple[float, float, float] -) -> float: - a = point1[0] - point2[0] - b = point1[1] - point2[1] - c = 0 - if point1[2] is not None and point2[2] is not None: - c = point1[2] - point2[2] - return math.hypot(math.hypot(a, b), c) - - -def get_key(node1_id: int, node2_id: int) -> tuple[int, int]: - return (node1_id, node2_id) if node1_id < node2_id else (node2_id, node1_id) - - -@dataclass -class WirelessLink: - bridge1: str - bridge2: str - iface: CoreInterface - linked: bool - label: str = None - - -class WirelessNode(CoreNetworkBase): - options: list[Configuration] = [ - ConfigBool( - id=KEY_ENABLED, default="1" if CONFIG_ENABLED else "0", label="Enabled?" 
- ), - ConfigFloat( - id=KEY_RANGE, default=str(CONFIG_RANGE), label="Max Range (pixels)" - ), - ConfigInt( - id=KEY_BANDWIDTH, default=str(CONFIG_BANDWIDTH), label="Bandwidth (bps)" - ), - ConfigInt(id=KEY_DELAY, default=str(CONFIG_DELAY), label="Delay (usec)"), - ConfigInt(id=KEY_JITTER, default=str(CONFIG_JITTER), label="Jitter (usec)"), - ConfigFloat( - id=KEY_LOSS_RANGE, - default=str(CONFIG_LOSS_RANGE), - label="Loss Start Range (pixels)", - ), - ConfigFloat( - id=KEY_LOSS_FACTOR, default=str(CONFIG_LOSS_FACTOR), label="Loss Factor" - ), - ConfigFloat(id=KEY_LOSS, default=str(CONFIG_LOSS), label="Loss Initial"), - ] - devices: set[str] = set() - - @classmethod - def add_device(cls) -> str: - while True: - name = f"we{secrets.token_hex(6)}" - if name not in cls.devices: - cls.devices.add(name) - break - return name - - @classmethod - def delete_device(cls, name: str) -> None: - cls.devices.discard(name) - - def __init__( - self, - session: "Session", - _id: int, - name: str, - server: "DistributedServer" = None, - options: NodeOptions = None, - ): - super().__init__(session, _id, name, server, options) - self.bridges: dict[int, tuple[CoreInterface, str]] = {} - self.links: dict[tuple[int, int], WirelessLink] = {} - self.position_enabled: bool = CONFIG_ENABLED - self.bandwidth: int = CONFIG_BANDWIDTH - self.delay: int = CONFIG_DELAY - self.jitter: int = CONFIG_JITTER - self.max_range: float = CONFIG_RANGE - self.loss_initial: float = CONFIG_LOSS - self.loss_range: float = CONFIG_LOSS_RANGE - self.loss_factor: float = CONFIG_LOSS_FACTOR - - def startup(self) -> None: - if self.up: - return - self.up = True - - def shutdown(self) -> None: - while self.bridges: - _, (_, bridge_name) = self.bridges.popitem() - self.net_client.delete_bridge(bridge_name) - self.host_cmd(f"{NFTABLES} delete table bridge {bridge_name}") - while self.links: - _, link = self.links.popitem() - link.iface.shutdown() - self.up = False - - def attach(self, iface: CoreInterface) -> None: - super().attach(iface) - logging.info("attaching node(%s) iface(%s)", iface.node.name, iface.name) - if self.up: - # create node unique bridge - bridge_name = f"wb{iface.node.id}.{self.id}.{self.session.id}" - self.net_client.create_bridge(bridge_name) - # setup initial bridge rules - self.host_cmd(f'{NFTABLES} "add table bridge {bridge_name}"') - self.host_cmd( - f"{NFTABLES} " - f"'add chain bridge {bridge_name} forward {{type filter hook " - f"forward priority -1; policy drop;}}'" - ) - self.host_cmd( - f"{NFTABLES} " - f"'add rule bridge {bridge_name} forward " - f"ibriport != {bridge_name} accept'" - ) - # associate node iface with bridge - iface.net_client.set_iface_master(bridge_name, iface.localname) - # assign position callback, when enabled - if self.position_enabled: - iface.poshook = self.position_callback - # save created bridge - self.bridges[iface.node.id] = (iface, bridge_name) - - def post_startup(self) -> None: - routes = {} - for node_id, (iface, bridge_name) in self.bridges.items(): - for onode_id, (oiface, obridge_name) in self.bridges.items(): - if node_id == onode_id: - continue - if node_id < onode_id: - node1, node2 = iface.node, oiface.node - bridge1, bridge2 = bridge_name, obridge_name - else: - node1, node2 = oiface.node, iface.node - bridge1, bridge2 = obridge_name, bridge_name - key = (node1.id, node2.id) - if key in self.links: - continue - # create node to node link - name1 = self.add_device() - name2 = self.add_device() - link_iface = CoreInterface(0, name1, name2, self.session.use_ovs()) - 
link_iface.startup() - link = WirelessLink(bridge1, bridge2, link_iface, False) - self.links[key] = link - # track bridge routes - node1_routes = routes.setdefault(node1.id, set()) - node1_routes.add(name1) - node2_routes = routes.setdefault(node2.id, set()) - node2_routes.add(name2) - if self.position_enabled: - link.linked = True - # assign ifaces to respective bridges - self.net_client.set_iface_master(bridge1, link_iface.name) - self.net_client.set_iface_master(bridge2, link_iface.localname) - # calculate link data - self.calc_link(iface, oiface) - for node_id, ifaces in routes.items(): - iface, bridge_name = self.bridges[node_id] - ifaces = ",".join(ifaces) - # out routes - self.host_cmd( - f"{NFTABLES} " - f'"add rule bridge {bridge_name} forward ' - f"iif {iface.localname} oif {{{ifaces}}} " - f'accept"' - ) - # in routes - self.host_cmd( - f"{NFTABLES} " - f'"add rule bridge {bridge_name} forward ' - f"iif {{{ifaces}}} oif {iface.localname} " - f'accept"' - ) - - def link_control(self, node1_id: int, node2_id: int, linked: bool) -> None: - key = get_key(node1_id, node2_id) - link = self.links.get(key) - if not link: - raise CoreError(f"invalid node links node1({node1_id}) node2({node2_id})") - bridge1, bridge2 = link.bridge1, link.bridge2 - iface = link.iface - if not link.linked and linked: - link.linked = True - self.net_client.set_iface_master(bridge1, iface.name) - self.net_client.set_iface_master(bridge2, iface.localname) - self.send_link(key[0], key[1], MessageFlags.ADD, link.label) - elif link.linked and not linked: - link.linked = False - self.net_client.delete_iface(bridge1, iface.name) - self.net_client.delete_iface(bridge2, iface.localname) - self.send_link(key[0], key[1], MessageFlags.DELETE, link.label) - - def link_config( - self, node1_id: int, node2_id: int, options1: LinkOptions, options2: LinkOptions - ) -> None: - key = get_key(node1_id, node2_id) - link = self.links.get(key) - if not link: - raise CoreError(f"invalid node links node1({node1_id}) node2({node2_id})") - iface = link.iface - has_netem = iface.has_netem - iface.options.update(options1) - iface.set_config() - name, localname = iface.name, iface.localname - iface.name, iface.localname = localname, name - iface.options.update(options2) - iface.has_netem = has_netem - iface.set_config() - iface.name, iface.localname = name, localname - if options1 == options2: - link.label = f"{options1.loss:.2f}%/{options1.delay}us" - else: - link.label = ( - f"({options1.loss:.2f}%/{options1.delay}us) " - f"({options2.loss:.2f}%/{options2.delay}us)" - ) - self.send_link(key[0], key[1], MessageFlags.NONE, link.label) - - def send_link( - self, - node1_id: int, - node2_id: int, - message_type: MessageFlags, - label: str = None, - ) -> None: - """ - Broadcasts out a wireless link/unlink message. 
- - :param node1_id: first node in link - :param node2_id: second node in link - :param message_type: type of link message to send - :param label: label to display for link - :return: nothing - """ - color = self.session.get_link_color(self.id) - link_data = LinkData( - message_type=message_type, - type=LinkTypes.WIRELESS, - node1_id=node1_id, - node2_id=node2_id, - network_id=self.id, - color=color, - label=label, - ) - self.session.broadcast_link(link_data) - - def position_callback(self, iface: CoreInterface) -> None: - for oiface, bridge_name in self.bridges.values(): - if iface == oiface: - continue - self.calc_link(iface, oiface) - - def calc_link(self, iface1: CoreInterface, iface2: CoreInterface) -> None: - key = get_key(iface1.node.id, iface2.node.id) - link = self.links.get(key) - point1 = iface1.node.position.get() - point2 = iface2.node.position.get() - distance = calc_distance(point1, point2) - if distance >= self.max_range: - if link.linked: - self.link_control(iface1.node.id, iface2.node.id, False) - else: - if not link.linked: - self.link_control(iface1.node.id, iface2.node.id, True) - loss_distance = max(distance - self.loss_range, 0.0) - max_distance = max(self.max_range - self.loss_range, 0.0) - loss = min((loss_distance / max_distance) * 100.0 * self.loss_factor, 100.0) - loss = max(self.loss_initial, loss) - options = LinkOptions( - loss=loss, - delay=self.delay, - bandwidth=self.bandwidth, - jitter=self.jitter, - ) - self.link_config(iface1.node.id, iface2.node.id, options, options) - - def adopt_iface(self, iface: CoreInterface, name: str) -> None: - raise CoreError(f"{type(self)} does not support adopt interface") - - def get_config(self) -> dict[str, Configuration]: - config = {x.id: x for x in copy.copy(self.options)} - config[KEY_ENABLED].default = "1" if self.position_enabled else "0" - config[KEY_RANGE].default = str(self.max_range) - config[KEY_LOSS_RANGE].default = str(self.loss_range) - config[KEY_LOSS_FACTOR].default = str(self.loss_factor) - config[KEY_LOSS].default = str(self.loss_initial) - config[KEY_BANDWIDTH].default = str(self.bandwidth) - config[KEY_DELAY].default = str(self.delay) - config[KEY_JITTER].default = str(self.jitter) - return config - - def set_config(self, config: dict[str, str]) -> None: - logger.info("wireless config: %s", config) - self.position_enabled = config[KEY_ENABLED] == "1" - self.max_range = float(config[KEY_RANGE]) - self.loss_range = float(config[KEY_LOSS_RANGE]) - self.loss_factor = float(config[KEY_LOSS_FACTOR]) - self.loss_initial = float(config[KEY_LOSS]) - self.bandwidth = int(config[KEY_BANDWIDTH]) - self.delay = int(config[KEY_DELAY]) - self.jitter = int(config[KEY_JITTER]) diff --git a/daemon/core/player.py b/daemon/core/player.py deleted file mode 100644 index d06e7b97..00000000 --- a/daemon/core/player.py +++ /dev/null @@ -1,450 +0,0 @@ -import ast -import csv -import enum -import logging -import sched -from pathlib import Path -from threading import Thread -from typing import IO, Callable, Optional - -import grpc - -from core.api.grpc.client import CoreGrpcClient, MoveNodesStreamer -from core.api.grpc.wrappers import LinkOptions - -logger = logging.getLogger(__name__) - - -@enum.unique -class PlayerEvents(enum.Enum): - """ - Provides event types for processing file events. 
- """ - - XY = enum.auto() - GEO = enum.auto() - CMD = enum.auto() - WLINK = enum.auto() - WILINK = enum.auto() - WICONFIG = enum.auto() - - @classmethod - def get(cls, value: str) -> Optional["PlayerEvents"]: - """ - Retrieves a valid event type from read input. - - :param value: value to get event type for - :return: valid event type, None otherwise - """ - event = None - try: - event = cls[value] - except KeyError: - pass - return event - - -class CorePlayerWriter: - """ - Provides conveniences for programatically creating a core file for playback. - """ - - def __init__(self, file_path: str): - """ - Create a CorePlayerWriter instance. - - :param file_path: path to create core file - """ - self._time: float = 0.0 - self._file_path: str = file_path - self._file: Optional[IO] = None - self._csv_file: Optional[csv.writer] = None - - def open(self) -> None: - """ - Opens the provided file path for writing and csv creation. - - :return: nothing - """ - logger.info("core player write file(%s)", self._file_path) - self._file = open(self._file_path, "w", newline="") - self._csv_file = csv.writer(self._file, quoting=csv.QUOTE_MINIMAL) - - def close(self) -> None: - """ - Closes the file being written to. - - :return: nothing - """ - if self._file: - self._file.close() - - def update(self, delay: float) -> None: - """ - Update and move the current play time forward by delay amount. - - :param delay: amount to move time forward by - :return: nothing - """ - self._time += delay - - def write_xy(self, node_id: int, x: float, y: float) -> None: - """ - Write a node xy movement event. - - :param node_id: id of node to move - :param x: x position - :param y: y position - :return: nothing - """ - self._csv_file.writerow([self._time, PlayerEvents.XY.name, node_id, x, y]) - - def write_geo(self, node_id: int, lon: float, lat: float, alt: float) -> None: - """ - Write a node geo movement event. - - :param node_id: id of node to move - :param lon: longitude position - :param lat: latitude position - :param alt: altitude position - :return: nothing - """ - self._csv_file.writerow( - [self._time, PlayerEvents.GEO.name, node_id, lon, lat, alt] - ) - - def write_cmd(self, node_id: int, wait: bool, shell: bool, cmd: str) -> None: - """ - Write a node command event. - - :param node_id: id of node to run command on - :param wait: should command wait for successful execution - :param shell: should command run under shell context - :param cmd: command to run - :return: nothing - """ - self._csv_file.writerow( - [self._time, PlayerEvents.CMD.name, node_id, wait, shell, f"'{cmd}'"] - ) - - def write_wlan_link( - self, wireless_id: int, node1_id: int, node2_id: int, linked: bool - ) -> None: - """ - Write a wlan link event. - - :param wireless_id: id of wlan network for link - :param node1_id: first node connected to wlan - :param node2_id: second node connected to wlan - :param linked: True if nodes are linked, False otherwise - :return: nothing - """ - self._csv_file.writerow( - [ - self._time, - PlayerEvents.WLINK.name, - wireless_id, - node1_id, - node2_id, - linked, - ] - ) - - def write_wireless_link( - self, wireless_id: int, node1_id: int, node2_id: int, linked: bool - ) -> None: - """ - Write a wireless link event. 
- - :param wireless_id: id of wireless network for link - :param node1_id: first node connected to wireless - :param node2_id: second node connected to wireless - :param linked: True if nodes are linked, False otherwise - :return: nothing - """ - self._csv_file.writerow( - [ - self._time, - PlayerEvents.WILINK.name, - wireless_id, - node1_id, - node2_id, - linked, - ] - ) - - def write_wireless_config( - self, - wireless_id: int, - node1_id: int, - node2_id: int, - loss1: float, - delay1: int, - loss2: float = None, - delay2: float = None, - ) -> None: - """ - Write a wireless link config event. - - :param wireless_id: id of wireless network for link - :param node1_id: first node connected to wireless - :param node2_id: second node connected to wireless - :param loss1: loss for the first interface - :param delay1: delay for the first interface - :param loss2: loss for the second interface, defaults to first interface loss - :param delay2: delay for second interface, defaults to first interface delay - :return: nothing - """ - loss2 = loss2 if loss2 is not None else loss1 - delay2 = delay2 if delay2 is not None else delay1 - self._csv_file.writerow( - [ - self._time, - PlayerEvents.WICONFIG.name, - wireless_id, - node1_id, - node2_id, - loss1, - delay1, - loss2, - delay2, - ] - ) - - -class CorePlayer: - """ - Provides core player functionality for reading a file with timed events - and playing them out. - """ - - def __init__(self, file_path: Path): - """ - Creates a CorePlayer instance. - - :param file_path: file to play path - """ - self.file_path: Path = file_path - self.core: CoreGrpcClient = CoreGrpcClient() - self.session_id: Optional[int] = None - self.node_streamer: Optional[MoveNodesStreamer] = None - self.node_streamer_thread: Optional[Thread] = None - self.scheduler: sched.scheduler = sched.scheduler() - self.handlers: dict[PlayerEvents, Callable] = { - PlayerEvents.XY: self.handle_xy, - PlayerEvents.GEO: self.handle_geo, - PlayerEvents.CMD: self.handle_cmd, - PlayerEvents.WLINK: self.handle_wlink, - PlayerEvents.WILINK: self.handle_wireless_link, - PlayerEvents.WICONFIG: self.handle_wireless_config, - } - - def init(self, session_id: Optional[int]) -> bool: - """ - Initialize core connections, settings to or retrieving session to use. - Also setup node streamer for xy/geo movements. - - :param session_id: session id to use, None for default session - :return: True if init was successful, False otherwise - """ - self.core.connect() - try: - if session_id is None: - sessions = self.core.get_sessions() - if len(sessions): - session_id = sessions[0].id - if session_id is None: - logger.error("no core sessions found") - return False - self.session_id = session_id - logger.info("playing to session(%s)", self.session_id) - self.node_streamer = MoveNodesStreamer(self.session_id) - self.node_streamer_thread = Thread( - target=self.core.move_nodes, args=(self.node_streamer,), daemon=True - ) - self.node_streamer_thread.start() - except grpc.RpcError as e: - logger.error("core is not running: %s", e.details()) - return False - return True - - def start(self) -> None: - """ - Starts playing file, reading the csv data line by line, then handling - each line event type. Delay is tracked and calculated, while processing, - to ensure we wait for the event time to be active. 
- - :return: nothing - """ - current_time = 0.0 - with self.file_path.open("r", newline="") as f: - for row in csv.reader(f): - # determine delay - input_time = float(row[0]) - delay = input_time - current_time - current_time = input_time - # determine event - event_value = row[1] - event = PlayerEvents.get(event_value) - if not event: - logger.error("unknown event type: %s", ",".join(row)) - continue - # get args and event functions - args = tuple(ast.literal_eval(x) for x in row[2:]) - event_func = self.handlers.get(event) - if not event_func: - logger.error("unknown event type handler: %s", ",".join(row)) - continue - logger.info( - "processing line time(%s) event(%s) args(%s)", - input_time, - event.name, - args, - ) - # schedule and run event - self.scheduler.enter(delay, 1, event_func, argument=args) - self.scheduler.run() - self.stop() - - def stop(self) -> None: - """ - Stop and cleanup playback. - - :return: nothing - """ - logger.info("stopping playback, cleaning up") - self.node_streamer.stop() - self.node_streamer_thread.join() - self.node_streamer_thread = None - - def handle_xy(self, node_id: int, x: float, y: float) -> None: - """ - Handle node xy movement event. - - :param node_id: id of node to move - :param x: x position - :param y: y position - :return: nothing - """ - logger.debug("handling xy node(%s) x(%s) y(%s)", node_id, x, y) - self.node_streamer.send_position(node_id, x, y) - - def handle_geo(self, node_id: int, lon: float, lat: float, alt: float) -> None: - """ - Handle node geo movement event. - - :param node_id: id of node to move - :param lon: longitude position - :param lat: latitude position - :param alt: altitude position - :return: nothing - """ - logger.debug( - "handling geo node(%s) lon(%s) lat(%s) alt(%s)", node_id, lon, lat, alt - ) - self.node_streamer.send_geo(node_id, lon, lat, alt) - - def handle_cmd(self, node_id: int, wait: bool, shell: bool, cmd: str) -> None: - """ - Handle node command event. - - :param node_id: id of node to run command - :param wait: True to wait for successful command, False otherwise - :param shell: True to run command in shell context, False otherwise - :param cmd: command to run - :return: nothing - """ - logger.debug( - "handling cmd node(%s) wait(%s) shell(%s) cmd(%s)", - node_id, - wait, - shell, - cmd, - ) - status, output = self.core.node_command( - self.session_id, node_id, cmd, wait, shell - ) - logger.info("cmd result(%s): %s", status, output) - - def handle_wlink( - self, net_id: int, node1_id: int, node2_id: int, linked: bool - ) -> None: - """ - Handle wlan link event. - - :param net_id: id of wlan network - :param node1_id: first node in link - :param node2_id: second node in link - :param linked: True if linked, Flase otherwise - :return: nothing - """ - logger.debug( - "handling wlink node1(%s) node2(%s) net(%s) linked(%s)", - node1_id, - node2_id, - net_id, - linked, - ) - self.core.wlan_link(self.session_id, net_id, node1_id, node2_id, linked) - - def handle_wireless_link( - self, wireless_id: int, node1_id: int, node2_id: int, linked: bool - ) -> None: - """ - Handle wireless link event. 
- - :param wireless_id: id of wireless network - :param node1_id: first node in link - :param node2_id: second node in link - :param linked: True if linked, Flase otherwise - :return: nothing - """ - logger.debug( - "handling link wireless(%s) node1(%s) node2(%s) linked(%s)", - wireless_id, - node1_id, - node2_id, - linked, - ) - self.core.wireless_linked( - self.session_id, wireless_id, node1_id, node2_id, linked - ) - - def handle_wireless_config( - self, - wireless_id: int, - node1_id: int, - node2_id: int, - loss1: float, - delay1: int, - loss2: float, - delay2: int, - ) -> None: - """ - Handle wireless config event. - - :param wireless_id: id of wireless network - :param node1_id: first node in link - :param node2_id: second node in link - :param loss1: first interface loss - :param delay1: first interface delay - :param loss2: second interface loss - :param delay2: second interface delay - :return: nothing - """ - logger.debug( - "handling config wireless(%s) node1(%s) node2(%s) " - "options1(%s/%s) options2(%s/%s)", - wireless_id, - node1_id, - node2_id, - loss1, - delay1, - loss2, - delay2, - ) - options1 = LinkOptions(loss=loss1, delay=delay1) - options2 = LinkOptions(loss=loss2, delay=delay2) - self.core.wireless_config( - self.session_id, wireless_id, node1_id, node2_id, options1, options2 - ) diff --git a/daemon/core/plugins/sdt.py b/daemon/core/plugins/sdt.py index f963c817..4d56f1a9 100644 --- a/daemon/core/plugins/sdt.py +++ b/daemon/core/plugins/sdt.py @@ -4,46 +4,20 @@ sdt.py: Scripted Display Tool (SDT3D) helper import logging import socket -from pathlib import Path -from typing import TYPE_CHECKING, Optional +from typing import IO, TYPE_CHECKING, Dict, Optional, Set, Tuple from urllib.parse import urlparse -from core.constants import CORE_CONF_DIR +from core.constants import CORE_CONF_DIR, CORE_DATA_DIR from core.emane.nodes import EmaneNet from core.emulator.data import LinkData, NodeData from core.emulator.enumerations import EventTypes, MessageFlags from core.errors import CoreError -from core.nodes.base import CoreNode, NodeBase -from core.nodes.network import HubNode, SwitchNode, TunnelNode, WlanNode -from core.nodes.physical import Rj45Node -from core.nodes.wireless import WirelessNode - -logger = logging.getLogger(__name__) +from core.nodes.base import CoreNetworkBase, NodeBase +from core.nodes.network import WlanNode if TYPE_CHECKING: from core.emulator.session import Session -LOCAL_ICONS_PATH: Path = Path(__file__).parent.parent / "gui" / "data" / "icons" -CORE_LAYER: str = "CORE" -NODE_LAYER: str = "CORE::Nodes" -LINK_LAYER: str = "CORE::Links" -WIRED_LINK_LAYER: str = f"{LINK_LAYER}::wired" -CORE_LAYERS: list[str] = [CORE_LAYER, LINK_LAYER, NODE_LAYER, WIRED_LINK_LAYER] -DEFAULT_LINK_COLOR: str = "red" -NODE_TYPES: dict[type[NodeBase], str] = { - HubNode: "hub", - SwitchNode: "lanswitch", - TunnelNode: "tunnel", - WlanNode: "wlan", - EmaneNet: "emane", - WirelessNode: "wireless", - Rj45Node: "rj45", -} - - -def is_wireless(node: NodeBase) -> bool: - return isinstance(node, (WlanNode, EmaneNet, WirelessNode)) - def get_link_id(node1_id: int, node2_id: int, network_id: int) -> str: link_id = f"{node1_id}-{node2_id}" @@ -52,6 +26,13 @@ def get_link_id(node1_id: int, node2_id: int, network_id: int) -> str: return link_id +CORE_LAYER = "CORE" +NODE_LAYER = "CORE::Nodes" +LINK_LAYER = "CORE::Links" +CORE_LAYERS = [CORE_LAYER, LINK_LAYER, NODE_LAYER] +DEFAULT_LINK_COLOR = "red" + + class Sdt: """ Helper class for exporting session objects to NRL"s SDT3D. 
@@ -63,19 +44,17 @@ class Sdt: # default altitude (in meters) for flyto view DEFAULT_ALT: int = 2500 # TODO: read in user"s nodes.conf here; below are default node types from the GUI - DEFAULT_SPRITES: dict[str, str] = [ - ("router", "router.png"), - ("host", "host.png"), - ("PC", "pc.png"), - ("mdr", "mdr.png"), - ("prouter", "prouter.png"), - ("hub", "hub.png"), - ("lanswitch", "lanswitch.png"), - ("wlan", "wlan.png"), - ("emane", "emane.png"), - ("wireless", "wireless.png"), - ("rj45", "rj45.png"), - ("tunnel", "tunnel.png"), + DEFAULT_SPRITES: Dict[str, str] = [ + ("router", "router.gif"), + ("host", "host.gif"), + ("PC", "pc.gif"), + ("mdr", "mdr.gif"), + ("prouter", "router_green.gif"), + ("hub", "hub.gif"), + ("lanswitch", "lanswitch.gif"), + ("wlan", "wlan.gif"), + ("rj45", "rj45.gif"), + ("tunnel", "tunnel.gif"), ] def __init__(self, session: "Session") -> None: @@ -85,12 +64,12 @@ class Sdt: :param session: session this manager is tied to """ self.session: "Session" = session - self.sock: Optional[socket.socket] = None + self.sock: Optional[IO] = None self.connected: bool = False self.url: str = self.DEFAULT_SDT_URL - self.address: Optional[tuple[Optional[str], Optional[int]]] = None + self.address: Optional[Tuple[Optional[str], Optional[int]]] = None self.protocol: Optional[str] = None - self.network_layers: set[str] = set() + self.network_layers: Set[str] = set() self.session.node_handlers.append(self.handle_node_update) self.session.link_handlers.append(self.handle_link_update) @@ -101,7 +80,7 @@ class Sdt: :return: True if enabled, False otherwise """ - return self.session.options.get_int("enablesdt") == 1 + return self.session.options.get_config("enablesdt") == "1" def seturl(self) -> None: """ @@ -110,7 +89,7 @@ class Sdt: :return: nothing """ - url = self.session.options.get("stdurl", self.DEFAULT_SDT_URL) + url = self.session.options.get_config("stdurl", default=self.DEFAULT_SDT_URL) self.url = urlparse(url) self.address = (self.url.hostname, self.url.port) self.protocol = self.url.scheme @@ -129,7 +108,7 @@ class Sdt: return False self.seturl() - logger.info("connecting to SDT at %s://%s", self.protocol, self.address) + logging.info("connecting to SDT at %s://%s", self.protocol, self.address) if self.sock is None: try: if self.protocol.lower() == "udp": @@ -138,8 +117,8 @@ class Sdt: else: # Default to tcp self.sock = socket.create_connection(self.address, 5) - except OSError: - logger.exception("SDT socket connect error") + except IOError: + logging.exception("SDT socket connect error") return False if not self.initialize(): @@ -158,7 +137,7 @@ class Sdt: :return: initialize command status """ - if not self.cmd(f'path "{LOCAL_ICONS_PATH.absolute()}"'): + if not self.cmd(f'path "{CORE_DATA_DIR}/icons/normal"'): return False # send node type to icon mappings for node_type, icon in self.DEFAULT_SPRITES: @@ -176,10 +155,11 @@ class Sdt: if self.sock: try: self.sock.close() - except OSError: - logger.error("error closing socket") + except IOError: + logging.error("error closing socket") finally: self.sock = None + self.connected = False def shutdown(self) -> None: @@ -207,13 +187,14 @@ class Sdt: """ if self.sock is None: return False + try: cmd = f"{cmdstr}\n".encode() - logger.debug("sdt cmd: %s", cmd) + logging.debug("sdt cmd: %s", cmd) self.sock.sendall(cmd) return True - except OSError: - logger.exception("SDT connection error") + except IOError: + logging.exception("SDT connection error") self.sock = None self.connected = False return False @@ -226,23 +207,26 @@ class 
Sdt: :return: nothing """ + nets = [] + # create layers for layer in CORE_LAYERS: self.cmd(f"layer {layer}") + with self.session.nodes_lock: - nets = [] - for node in self.session.nodes.values(): - if isinstance(node, (EmaneNet, WlanNode)): + for node_id in self.session.nodes: + node = self.session.nodes[node_id] + if isinstance(node, CoreNetworkBase): nets.append(node) if not isinstance(node, NodeBase): continue self.add_node(node) - for link in self.session.link_manager.links(): - if is_wireless(link.node1) or is_wireless(link.node2): - continue - link_data = link.get_data(MessageFlags.ADD) - self.handle_link_update(link_data) + for net in nets: - for link_data in net.links(MessageFlags.ADD): + all_links = net.links(flags=MessageFlags.ADD) + for link_data in all_links: + is_wireless = isinstance(net, (WlanNode, EmaneNet)) + if is_wireless and link_data.node1_id == net.id: + continue self.handle_link_update(link_data) def get_node_position(self, node: NodeBase) -> Optional[str]: @@ -265,21 +249,20 @@ class Sdt: :param node: node to add :return: nothing """ - logger.debug("sdt add node: %s - %s", node.id, node.name) + logging.debug("sdt add node: %s - %s", node.id, node.name) if not self.connect(): return pos = self.get_node_position(node) if not pos: return - if isinstance(node, CoreNode): - node_type = node.model - else: - node_type = NODE_TYPES.get(type(node), "PC") + node_type = node.type + if node_type is None: + node_type = type(node).type icon = node.icon if icon: node_type = node.name - icon = icon.replace("$CORE_DATA_DIR", str(LOCAL_ICONS_PATH.absolute())) - icon = icon.replace("$CORE_CONF_DIR", str(CORE_CONF_DIR)) + icon = icon.replace("$CORE_DATA_DIR", CORE_DATA_DIR) + icon = icon.replace("$CORE_CONF_DIR", CORE_CONF_DIR) self.cmd(f"sprite {node_type} image {icon}") self.cmd( f'node {node.id} nodeLayer "{NODE_LAYER}" ' @@ -296,7 +279,7 @@ class Sdt: :param alt: node altitude :return: nothing """ - logger.debug("sdt update node: %s - %s", node.id, node.name) + logging.debug("sdt update node: %s - %s", node.id, node.name) if not self.connect(): return @@ -316,7 +299,7 @@ class Sdt: :param node_id: node id to delete :return: nothing """ - logger.debug("sdt delete node: %s", node_id) + logging.debug("sdt delete node: %s", node_id) if not self.connect(): return self.cmd(f"delete node,{node_id}") @@ -331,7 +314,7 @@ class Sdt: if not self.connect(): return node = node_data.node - logger.debug("sdt handle node update: %s - %s", node.id, node.name) + logging.debug("sdt handle node update: %s - %s", node.id, node.name) if node_data.message_type == MessageFlags.DELETE: self.cmd(f"delete node,{node.id}") else: @@ -340,7 +323,7 @@ class Sdt: if all([lat is not None, lon is not None, alt is not None]): pos = f"pos {lon:.6f},{lat:.6f},{alt:.6f}" self.cmd(f"node {node.id} {pos}") - elif node_data.message_type == MessageFlags.NONE: + elif node_data.message_type == 0: lat, lon, alt = self.session.location.getgeo(x, y, 0) pos = f"pos {lon:.6f},{lat:.6f},{alt:.6f}" self.cmd(f"node {node.id} {pos}") @@ -355,7 +338,7 @@ class Sdt: result = False try: node = self.session.get_node(node_id, NodeBase) - result = isinstance(node, (WlanNode, EmaneNet, WirelessNode)) + result = isinstance(node, (WlanNode, EmaneNet)) except CoreError: pass return result @@ -372,7 +355,7 @@ class Sdt: :param label: label for link :return: nothing """ - logger.debug("sdt add link: %s, %s, %s", node1_id, node2_id, network_id) + logging.debug("sdt add link: %s, %s, %s", node1_id, node2_id, network_id) if not self.connect(): 
return if self.wireless_net_check(node1_id) or self.wireless_net_check(node2_id): @@ -382,10 +365,13 @@ class Sdt: color = self.session.get_link_color(network_id) line = f"{color},2" link_id = get_link_id(node1_id, node2_id, network_id) + layer = LINK_LAYER if network_id: - layer = self.get_network_layer(network_id) - else: - layer = WIRED_LINK_LAYER + node = self.session.nodes.get(network_id) + if node: + network_name = node.name + layer = f"{layer}::{network_name}" + self.network_layers.add(layer) link_label = "" if label: link_label = f'linklabel on,"{label}"' @@ -394,15 +380,6 @@ class Sdt: f"{link_label}" ) - def get_network_layer(self, network_id: int) -> str: - node = self.session.nodes.get(network_id) - if node: - layer = f"{LINK_LAYER}::{node.name}" - self.network_layers.add(layer) - else: - layer = WIRED_LINK_LAYER - return layer - def delete_link(self, node1_id: int, node2_id: int, network_id: int = None) -> None: """ Handle deleting a link in SDT. @@ -412,7 +389,7 @@ class Sdt: :param network_id: network link is associated with, None otherwise :return: nothing """ - logger.debug("sdt delete link: %s, %s, %s", node1_id, node2_id, network_id) + logging.debug("sdt delete link: %s, %s, %s", node1_id, node2_id, network_id) if not self.connect(): return if self.wireless_net_check(node1_id) or self.wireless_net_check(node2_id): @@ -432,7 +409,7 @@ class Sdt: :param label: label to update :return: nothing """ - logger.debug("sdt edit link: %s, %s, %s", node1_id, node2_id, network_id) + logging.debug("sdt edit link: %s, %s, %s", node1_id, node2_id, network_id) if not self.connect(): return if self.wireless_net_check(node1_id) or self.wireless_net_check(node2_id): diff --git a/daemon/core/scripts/cleanup.py b/daemon/core/scripts/cleanup.py deleted file mode 100755 index 1ab4647e..00000000 --- a/daemon/core/scripts/cleanup.py +++ /dev/null @@ -1,105 +0,0 @@ -import argparse -import os -import subprocess -import sys -import time - - -def check_root() -> None: - if os.geteuid() != 0: - print("permission denied, run this script as root") - sys.exit(1) - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser( - description="helps cleanup lingering core processes and files", - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - ) - parser.add_argument( - "-d", "--daemon", action="store_true", help="also kill core-daemon" - ) - return parser.parse_args() - - -def cleanup_daemon() -> None: - print("killing core-daemon process ... ", end="") - result = subprocess.call("pkill -9 core-daemon", shell=True) - if result: - print("not found") - else: - print("done") - - -def cleanup_nodes() -> None: - print("killing vnoded processes ... ", end="") - result = subprocess.call("pkill -KILL vnoded", shell=True) - if result: - print("none found") - else: - time.sleep(1) - print("done") - - -def cleanup_emane() -> None: - print("killing emane processes ... ", end="") - result = subprocess.call("pkill emane", shell=True) - if result: - print("none found") - else: - print("done") - - -def cleanup_sessions() -> None: - print("removing session directories ... 
", end="") - result = subprocess.call("rm -rf /tmp/pycore*", shell=True) - if result: - print("none found") - else: - print("done") - - -def cleanup_interfaces() -> None: - print("cleaning up devices") - output = subprocess.check_output("ip -br link show", shell=True) - lines = output.decode().strip().split("\n") - for line in lines: - values = line.split() - name = values[0] - if ( - name.startswith("veth") - or name.startswith("beth") - or name.startswith("gt.") - or name.startswith("b.") - or name.startswith("ctrl") - ): - name = name.split("@")[0] - result = subprocess.call(f"ip link delete {name}", shell=True) - if result: - print(f"failed to remove {name}") - else: - print(f"removed {name}") - if name.startswith("b."): - result = subprocess.call( - f"nft delete table bridge {name}", - stdout=subprocess.DEVNULL, - stderr=subprocess.DEVNULL, - shell=True, - ) - if not result: - print(f"cleared nft rules for {name}") - - -def main() -> None: - check_root() - args = parse_args() - if args.daemon: - cleanup_daemon() - cleanup_nodes() - cleanup_emane() - cleanup_interfaces() - cleanup_sessions() - - -if __name__ == "__main__": - main() diff --git a/daemon/core/scripts/daemon.py b/daemon/core/scripts/daemon.py deleted file mode 100755 index 6b9caa54..00000000 --- a/daemon/core/scripts/daemon.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -core-daemon: the CORE daemon is a server process that receives CORE API -messages and instantiates emulated nodes and networks within the kernel. Various -message handlers are defined and some support for sending messages. -""" - -import argparse -import logging -import os -import time -from configparser import ConfigParser -from pathlib import Path - -from core import constants -from core.api.grpc.server import CoreGrpcServer -from core.constants import CORE_CONF_DIR, COREDPY_VERSION -from core.emulator.coreemu import CoreEmu -from core.utils import load_logging_config - -logger = logging.getLogger(__name__) - - -def banner(): - """ - Output the program banner printed to the terminal or log file. - - :return: nothing - """ - logger.info("CORE daemon v.%s started %s", constants.COREDPY_VERSION, time.ctime()) - - -def cored(cfg): - """ - Start the CoreServer object and enter the server loop. - - :param dict cfg: core configuration - :return: nothing - """ - # initialize grpc api - coreemu = CoreEmu(cfg) - grpc_server = CoreGrpcServer(coreemu) - address_config = cfg["grpcaddress"] - port_config = cfg["grpcport"] - grpc_address = f"{address_config}:{port_config}" - grpc_server.listen(grpc_address) - - -def get_merged_config(filename): - """ - Return a configuration after merging config file and command-line arguments. - - :param str filename: file name to merge configuration settings with - :return: merged configuration - :rtype: dict - """ - # these are the defaults used in the config file - default_log = os.path.join(constants.CORE_CONF_DIR, "logging.conf") - default_grpc_port = "50051" - default_address = "localhost" - defaults = { - "grpcport": default_grpc_port, - "grpcaddress": default_address, - "logfile": default_log, - } - parser = argparse.ArgumentParser( - description=f"CORE daemon v.{COREDPY_VERSION} instantiates Linux network namespace nodes." 
- ) - parser.add_argument( - "-f", - "--configfile", - dest="configfile", - help=f"read config from specified file; default = {filename}", - ) - parser.add_argument( - "--ovs", - action="store_true", - help="enable experimental ovs mode, default is false", - ) - parser.add_argument( - "--grpc-port", - dest="grpcport", - help=f"grpc port to listen on; default {default_grpc_port}", - ) - parser.add_argument( - "--grpc-address", - dest="grpcaddress", - help=f"grpc address to listen on; default {default_address}", - ) - parser.add_argument( - "-l", "--logfile", help=f"core logging configuration; default {default_log}" - ) - # parse command line options - args = parser.parse_args() - # convert ovs to internal format - args.ovs = "1" if args.ovs else "0" - # read the config file - if args.configfile is not None: - filename = args.configfile - del args.configfile - cfg = ConfigParser(defaults) - cfg.read(filename) - section = "core-daemon" - if not cfg.has_section(section): - cfg.add_section(section) - # merge argparse with configparser - for opt in vars(args): - val = getattr(args, opt) - if val is not None: - cfg.set(section, opt, str(val)) - return dict(cfg.items(section)) - - -def main(): - """ - Main program startup. - - :return: nothing - """ - cfg = get_merged_config(f"{CORE_CONF_DIR}/core.conf") - log_config_path = Path(cfg["logfile"]) - load_logging_config(log_config_path) - banner() - try: - cored(cfg) - except KeyboardInterrupt: - logger.info("keyboard interrupt, stopping core daemon") - - -if __name__ == "__main__": - main() diff --git a/daemon/core/scripts/player.py b/daemon/core/scripts/player.py deleted file mode 100755 index 07728939..00000000 --- a/daemon/core/scripts/player.py +++ /dev/null @@ -1,51 +0,0 @@ -import argparse -import logging -import sys -from pathlib import Path - -from core.player import CorePlayer - -logger = logging.getLogger(__name__) - - -def path_type(value: str) -> Path: - file_path = Path(value) - if not file_path.is_file(): - raise argparse.ArgumentTypeError(f"file does not exist: {value}") - return file_path - - -def parse_args() -> argparse.Namespace: - """ - Setup and parse command line arguments. - - :return: parsed arguments - """ - parser = argparse.ArgumentParser( - description="core player runs files that can move nodes and send commands", - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - ) - parser.add_argument( - "-f", "--file", required=True, type=path_type, help="core file to play" - ) - parser.add_argument( - "-s", - "--session", - type=int, - help="session to play to, first found session otherwise", - ) - return parser.parse_args() - - -def main() -> None: - logging.basicConfig(level=logging.INFO) - args = parse_args() - player = CorePlayer(args.file) - result = player.init(args.session) - if not result: - sys.exit(1) - player.start() - - -if __name__ == "__main__": - main() diff --git a/daemon/core/services/__init__.py b/daemon/core/services/__init__.py index e69de29b..94e1e9d1 100644 --- a/daemon/core/services/__init__.py +++ b/daemon/core/services/__init__.py @@ -0,0 +1,20 @@ +""" +Services + +Services available to nodes can be put in this directory. Everything listed in +__all__ is automatically loaded by the main core module. +""" +import os + +from core.services.coreservices import ServiceManager + +_PATH = os.path.abspath(os.path.dirname(__file__)) + + +def load(): + """ + Loads all services from the modules that reside under core.services. 
+ + :return: list of services that failed to load + """ + return ServiceManager.add_services(_PATH) diff --git a/daemon/core/services/bird.py b/daemon/core/services/bird.py index c2ecc4dc..ffb177f3 100644 --- a/daemon/core/services/bird.py +++ b/daemon/core/services/bird.py @@ -1,7 +1,7 @@ """ bird.py: defines routing services provided by the BIRD Internet Routing Daemon. """ -from typing import Optional +from typing import Optional, Tuple from core.nodes.base import CoreNode from core.services.coreservices import CoreService @@ -14,12 +14,12 @@ class Bird(CoreService): name: str = "bird" group: str = "BIRD" - executables: tuple[str, ...] = ("bird",) - dirs: tuple[str, ...] = ("/etc/bird",) - configs: tuple[str, ...] = ("/etc/bird/bird.conf",) - startup: tuple[str, ...] = (f"bird -c {configs[0]}",) - shutdown: tuple[str, ...] = ("killall bird",) - validate: tuple[str, ...] = ("pidof bird",) + executables: Tuple[str, ...] = ("bird",) + dirs: Tuple[str, ...] = ("/etc/bird",) + configs: Tuple[str, ...] = ("/etc/bird/bird.conf",) + startup: Tuple[str, ...] = ("bird -c %s" % (configs[0]),) + shutdown: Tuple[str, ...] = ("killall bird",) + validate: Tuple[str, ...] = ("pidof bird",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -48,30 +48,33 @@ class Bird(CoreService): Returns configuration file text. Other services that depend on bird will have hooks that are invoked here. """ - cfg = f"""\ + cfg = """\ /* Main configuration file for BIRD. This is ony a template, * you will *need* to customize it according to your needs * Beware that only double quotes \'"\' are valid. No singles. */ -log "/var/log/{cls.name}.log" all; +log "/var/log/%s.log" all; #debug protocols all; #debug commands 2; -router id {cls.router_id(node)}; # Mandatory for IPv6, may be automatic for IPv4 +router id %s; # Mandatory for IPv6, may be automatic for IPv4 -protocol kernel {{ +protocol kernel { persist; # Don\'t remove routes on BIRD shutdown scan time 200; # Scan kernel routing table every 200 seconds export all; import all; -}} +} -protocol device {{ +protocol device { scan time 10; # Scan interfaces every 10 seconds -}} +} -""" +""" % ( + cls.name, + cls.router_id(node), + ) # generate protocol specific configurations for s in node.services: @@ -91,8 +94,8 @@ class BirdService(CoreService): name: Optional[str] = None group: str = "BIRD" - executables: tuple[str, ...] = ("bird",) - dependencies: tuple[str, ...] = ("bird",) + executables: Tuple[str, ...] = ("bird",) + dependencies: Tuple[str, ...] = ("bird",) meta: str = "The config file for this service can be found in the bird service." @classmethod @@ -108,7 +111,7 @@ class BirdService(CoreService): """ cfg = "" for iface in node.get_ifaces(control=False): - cfg += f' interface "{iface.name}";\n' + cfg += ' interface "%s";\n' % iface.name return cfg diff --git a/daemon/core/services/coreservices.py b/daemon/core/services/coreservices.py index 0eee980e..b4c33990 100644 --- a/daemon/core/services/coreservices.py +++ b/daemon/core/services/coreservices.py @@ -9,13 +9,19 @@ services. 
import enum import logging -import pkgutil import time -from collections.abc import Iterable -from pathlib import Path -from typing import TYPE_CHECKING, Optional, Union +from typing import ( + TYPE_CHECKING, + Dict, + Iterable, + List, + Optional, + Set, + Tuple, + Type, + Union, +) -from core import services as core_services from core import utils from core.emulator.data import FileData from core.emulator.enumerations import ExceptionLevels, MessageFlags, RegisterTlvs @@ -27,12 +33,10 @@ from core.errors import ( ) from core.nodes.base import CoreNode -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emulator.session import Session - CoreServiceType = Union["CoreService", type["CoreService"]] + CoreServiceType = Union["CoreService", Type["CoreService"]] class ServiceMode(enum.Enum): @@ -48,25 +52,25 @@ class ServiceDependencies: provided. """ - def __init__(self, services: list["CoreServiceType"]) -> None: - self.visited: set[str] = set() - self.services: dict[str, "CoreServiceType"] = {} - self.paths: dict[str, list["CoreServiceType"]] = {} - self.boot_paths: list[list["CoreServiceType"]] = [] - roots = {x.name for x in services} + def __init__(self, services: List["CoreServiceType"]) -> None: + self.visited: Set[str] = set() + self.services: Dict[str, "CoreServiceType"] = {} + self.paths: Dict[str, List["CoreServiceType"]] = {} + self.boot_paths: List[List["CoreServiceType"]] = [] + roots = set([x.name for x in services]) for service in services: self.services[service.name] = service roots -= set(service.dependencies) - self.roots: list["CoreServiceType"] = [x for x in services if x.name in roots] + self.roots: List["CoreServiceType"] = [x for x in services if x.name in roots] if services and not self.roots: raise ValueError("circular dependency is present") def _search( self, service: "CoreServiceType", - visiting: set[str] = None, - path: list[str] = None, - ) -> list["CoreServiceType"]: + visiting: Set[str] = None, + path: List[str] = None, + ) -> List["CoreServiceType"]: if service.name in self.visited: return self.paths[service.name] self.visited.add(service.name) @@ -94,21 +98,127 @@ class ServiceDependencies: self.paths[service.name] = path return path - def boot_order(self) -> list[list["CoreServiceType"]]: + def boot_order(self) -> List[List["CoreServiceType"]]: for service in self.roots: self._search(service) return self.boot_paths +class ServiceShim: + keys: List[str] = [ + "dirs", + "files", + "startidx", + "cmdup", + "cmddown", + "cmdval", + "meta", + "starttime", + ] + + @classmethod + def tovaluelist(cls, node: CoreNode, service: "CoreService") -> str: + """ + Convert service properties into a string list of key=value pairs, + separated by "|". + + :param node: node to get value list for + :param service: service to get value list for + :return: value list string + """ + start_time = 0 + start_index = 0 + valmap = [ + service.dirs, + service.configs, + start_index, + service.startup, + service.shutdown, + service.validate, + service.meta, + start_time, + ] + if not service.custom: + valmap[1] = service.get_configs(node) + valmap[3] = service.get_startup(node) + vals = ["%s=%s" % (x, y) for x, y in zip(cls.keys, valmap)] + return "|".join(vals) + + @classmethod + def fromvaluelist(cls, service: "CoreService", values: List[str]) -> None: + """ + Convert list of values into properties for this instantiated + (customized) service. 
+ + :param service: service to get value list for + :param values: value list to set properties from + :return: nothing + """ + # TODO: support empty value? e.g. override default meta with '' + for key in cls.keys: + try: + cls.setvalue(service, key, values[cls.keys.index(key)]) + except IndexError: + # old config does not need to have new keys + logging.exception("error indexing into key") + + @classmethod + def setvalue(cls, service: "CoreService", key: str, value: str) -> None: + """ + Set values for this service. + + :param service: service to get value list for + :param key: key to set value for + :param value: value of key to set + :return: nothing + """ + if key not in cls.keys: + raise ValueError("key `%s` not in `%s`" % (key, cls.keys)) + # this handles data conversion to int, string, and tuples + if value: + if key == "startidx": + value = int(value) + elif key == "meta": + value = str(value) + else: + value = utils.make_tuple_fromstr(value, str) + + if key == "dirs": + service.dirs = value + elif key == "files": + service.configs = value + elif key == "cmdup": + service.startup = value + elif key == "cmddown": + service.shutdown = value + elif key == "cmdval": + service.validate = value + elif key == "meta": + service.meta = value + + @classmethod + def servicesfromopaque(cls, opaque: str) -> List[str]: + """ + Build a list of services from an opaque data string. + + :param opaque: opaque data string + :return: services + """ + servicesstring = opaque.split(":") + if servicesstring[0] != "service": + return [] + return servicesstring[1].split(",") + + class ServiceManager: """ Manages services available for CORE nodes to use. """ - services: dict[str, type["CoreService"]] = {} + services: Dict[str, Type["CoreService"]] = {} @classmethod - def add(cls, service: type["CoreService"]) -> None: + def add(cls, service: Type["CoreService"]) -> None: """ Add a service to manager. @@ -117,31 +227,31 @@ class ServiceManager: :raises ValueError: when service cannot be loaded """ name = service.name - logger.debug("loading service: class(%s) name(%s)", service.__name__, name) - # avoid services with no name - if name is None: - logger.debug("not loading class(%s) with no name", service.__name__) - return + logging.debug("loading service: class(%s) name(%s)", service.__name__, name) + # avoid duplicate services if name in cls.services: - raise ValueError(f"duplicate service being added: {name}") + raise ValueError("duplicate service being added: %s" % name) + # validate dependent executables are present for executable in service.executables: try: utils.which(executable, required=True) except CoreError as e: raise CoreError(f"service({name}): {e}") + # validate service on load succeeds try: service.on_load() except Exception as e: - logger.exception("error during service(%s) on load", service.name) + logging.exception("error during service(%s) on load", service.name) raise ValueError(e) + # make service available cls.services[name] = service @classmethod - def get(cls, name: str) -> type["CoreService"]: + def get(cls, name: str) -> Type["CoreService"]: """ Retrieve a service from the manager. @@ -154,7 +264,7 @@ class ServiceManager: return service @classmethod - def add_services(cls, path: Path) -> list[str]: + def add_services(cls, path: str) -> List[str]: """ Method for retrieving all CoreServices from a given path. 
@@ -166,28 +276,14 @@ class ServiceManager: for service in services: if not service.name: continue + try: cls.add(service) except (CoreError, ValueError) as e: service_errors.append(service.name) - logger.debug("not loading service(%s): %s", service.name, e) + logging.debug("not loading service(%s): %s", service.name, e) return service_errors - @classmethod - def load_locals(cls) -> list[str]: - errors = [] - for module_info in pkgutil.walk_packages( - core_services.__path__, f"{core_services.__name__}." - ): - services = utils.load_module(module_info.name, CoreService) - for service in services: - try: - cls.add(service) - except CoreError as e: - errors.append(service.name) - logger.debug("not loading service(%s): %s", service.name, e) - return errors - class CoreServices: """ @@ -209,7 +305,7 @@ class CoreServices: """ self.session: "Session" = session # dict of default services tuples, key is node type - self.default_services: dict[str, list[str]] = { + self.default_services: Dict[str, List[str]] = { "mdr": ["zebra", "OSPFv3MDR", "IPForward"], "PC": ["DefaultRoute"], "prouter": [], @@ -217,7 +313,7 @@ class CoreServices: "host": ["DefaultRoute", "SSH"], } # dict of node ids to dict of custom services by name - self.custom_services: dict[int, dict[str, "CoreService"]] = {} + self.custom_services: Dict[int, Dict[str, "CoreService"]] = {} def reset(self) -> None: """ @@ -225,6 +321,26 @@ class CoreServices: """ self.custom_services.clear() + def get_default_services(self, node_type: str) -> List[Type["CoreService"]]: + """ + Get the list of default services that should be enabled for a + node for the given node type. + + :param node_type: node type to get default services for + :return: default services + """ + logging.debug("getting default services for type: %s", node_type) + results = [] + defaults = self.default_services.get(node_type, []) + for name in defaults: + logging.debug("checking for service with service manager: %s", name) + service = ServiceManager.get(name) + if not service: + logging.warning("default service %s is unknown", name) + else: + results.append(service) + return results + def get_service( self, node_id: int, service_name: str, default_service: bool = False ) -> "CoreService": @@ -253,7 +369,7 @@ class CoreServices: :param service_name: name of service to set :return: nothing """ - logger.debug("setting custom service(%s) for node: %s", service_name, node_id) + logging.debug("setting custom service(%s) for node: %s", service_name, node_id) service = self.get_service(node_id, service_name) if not service: service_class = ServiceManager.get(service_name) @@ -264,32 +380,32 @@ class CoreServices: node_services[service.name] = service def add_services( - self, node: CoreNode, model: str, services: list[str] = None + self, node: CoreNode, node_type: str, services: List[str] = None ) -> None: """ Add services to a node. 
:param node: node to add services to - :param model: node model type to add services for + :param node_type: node type to add services to :param services: names of services to add to node :return: nothing """ if not services: - logger.info( - "using default services for node(%s) type(%s)", node.name, model + logging.info( + "using default services for node(%s) type(%s)", node.name, node_type ) - services = self.default_services.get(model, []) - logger.info("setting services for node(%s): %s", node.name, services) + services = self.default_services.get(node_type, []) + logging.info("setting services for node(%s): %s", node.name, services) for service_name in services: service = self.get_service(node.id, service_name, default_service=True) if not service: - logger.warning( + logging.warning( "unknown service(%s) for node(%s)", service_name, node.name ) continue node.services.append(service) - def all_configs(self) -> list[tuple[int, "CoreService"]]: + def all_configs(self) -> List[Tuple[int, "CoreService"]]: """ Return (node_id, service) tuples for all stored configs. Used when reconnecting to a session or opening XML. @@ -304,7 +420,7 @@ class CoreServices: configs.append((node_id, service)) return configs - def all_files(self, service: "CoreService") -> list[tuple[str, str]]: + def all_files(self, service: "CoreService") -> List[Tuple[str, str]]: """ Return all customized files stored with a service. Used when reconnecting to a session or opening XML. @@ -340,8 +456,8 @@ class CoreServices: if exceptions: raise CoreServiceBootError(*exceptions) - def _boot_service_path(self, node: CoreNode, boot_path: list["CoreServiceType"]): - logger.info( + def _boot_service_path(self, node: CoreNode, boot_path: List["CoreServiceType"]): + logging.info( "booting node(%s) services: %s", node.name, " -> ".join([x.name for x in boot_path]), @@ -351,7 +467,7 @@ class CoreServices: try: self.boot_service(node, service) except Exception as e: - logger.exception("exception booting service: %s", service.name) + logging.exception("exception booting service: %s", service.name) raise CoreServiceBootError(e) def boot_service(self, node: CoreNode, service: "CoreServiceType") -> None: @@ -363,7 +479,7 @@ class CoreServices: :param service: service to start :return: nothing """ - logger.info( + logging.info( "starting node(%s) service(%s) validation(%s)", node.name, service.name, @@ -372,11 +488,10 @@ class CoreServices: # create service directories for directory in service.dirs: - dir_path = Path(directory) try: - node.create_dir(dir_path) - except (CoreCommandError, CoreError) as e: - logger.warning( + node.privatedir(directory) + except (CoreCommandError, ValueError) as e: + logging.warning( "error mounting private dir '%s' for service '%s': %s", directory, service.name, @@ -391,7 +506,7 @@ class CoreServices: status = self.startup_service(node, service, wait) if status: raise CoreServiceBootError( - f"node({node.name}) service({service.name}) error during startup" + "node(%s) service(%s) error during startup" % (node.name, service.name) ) # blocking mode is finished @@ -416,17 +531,17 @@ class CoreServices: if status: raise CoreServiceBootError( - f"node({node.name}) service({service.name}) failed validation" + "node(%s) service(%s) failed validation" % (node.name, service.name) ) - def copy_service_file(self, node: CoreNode, file_path: Path, cfg: str) -> bool: + def copy_service_file(self, node: CoreNode, filename: str, cfg: str) -> bool: """ Given a configured service filename and config, determine if the config 
references an existing file that should be copied. Returns True for local files, False for generated. :param node: node to copy service for - :param file_path: file name for a configured service + :param filename: file name for a configured service :param cfg: configuration string :return: True if successful, False otherwise """ @@ -435,7 +550,7 @@ class CoreServices: src = src.split("\n")[0] src = utils.expand_corepath(src, node.session, node) # TODO: glob here - node.copy_file(src, file_path, mode=0o644) + node.nodefilecopy(filename, src, mode=0o644) return True return False @@ -447,21 +562,21 @@ class CoreServices: :param service: service to validate :return: service validation status """ - logger.debug("validating node(%s) service(%s)", node.name, service.name) + logging.debug("validating node(%s) service(%s)", node.name, service.name) cmds = service.validate if not service.custom: cmds = service.get_validate(node) status = 0 for cmd in cmds: - logger.debug("validating service(%s) using: %s", service.name, cmd) + logging.debug("validating service(%s) using: %s", service.name, cmd) try: node.cmd(cmd) except CoreCommandError as e: - logger.debug( + logging.debug( "node(%s) service(%s) validate failed", node.name, service.name ) - logger.debug("cmd(%s): %s", e.cmd, e.output) + logging.debug("cmd(%s): %s", e.cmd, e.output) status = -1 break @@ -496,7 +611,7 @@ class CoreServices: f"error stopping service {service.name}: {e.stderr}", node.id, ) - logger.exception("error running stop command %s", args) + logging.exception("error running stop command %s", args) status = -1 return status @@ -531,11 +646,11 @@ class CoreServices: # get the file data data = service.config_data.get(filename) if data is None: - data = service.generate_config(node, filename) + data = "%s" % service.generate_config(node, filename) else: - data = data + data = "%s" % data - filetypestr = f"service:{service.name}" + filetypestr = "service:%s" % service.name return FileData( message_type=MessageFlags.ADD, node=node.id, @@ -564,13 +679,13 @@ class CoreServices: # retrieve custom service service = self.get_service(node_id, service_name) if service is None: - logger.warning("received file name for unknown service: %s", service_name) + logging.warning("received file name for unknown service: %s", service_name) return # validate file being set is valid config_files = service.configs if file_name not in config_files: - logger.warning( + logging.warning( "received unknown file(%s) for service(%s)", file_name, service_name ) return @@ -598,7 +713,7 @@ class CoreServices: try: node.cmd(cmd, wait) except CoreCommandError: - logger.exception("error starting command") + logging.exception("error starting command") status = -1 return status @@ -614,25 +729,27 @@ class CoreServices: config_files = service.configs if not service.custom: config_files = service.get_configs(node) + for file_name in config_files: - file_path = Path(file_name) - logger.debug( + logging.debug( "generating service config custom(%s): %s", service.custom, file_name ) if service.custom: cfg = service.config_data.get(file_name) if cfg is None: cfg = service.generate_config(node, file_name) + # cfg may have a file:/// url for copying from a file try: - if self.copy_service_file(node, file_path, cfg): + if self.copy_service_file(node, file_name, cfg): continue - except OSError: - logger.exception("error copying service file: %s", file_name) + except IOError: + logging.exception("error copying service file: %s", file_name) continue else: cfg = 
service.generate_config(node, file_name) - node.create_file(file_path, cfg) + + node.nodefile(file_name, cfg) def service_reconfigure(self, node: CoreNode, service: "CoreService") -> None: """ @@ -645,15 +762,17 @@ class CoreServices: config_files = service.configs if not service.custom: config_files = service.get_configs(node) + for file_name in config_files: - file_path = Path(file_name) if file_name[:7] == "file:///": # TODO: implement this raise NotImplementedError + cfg = service.config_data.get(file_name) if cfg is None: cfg = service.generate_config(node, file_name) - node.create_file(file_path, cfg) + + node.nodefile(file_name, cfg) class CoreService: @@ -665,31 +784,31 @@ class CoreService: name: Optional[str] = None # executables that must exist for service to run - executables: tuple[str, ...] = () + executables: Tuple[str, ...] = () # sets service requirements that must be started prior to this service starting - dependencies: tuple[str, ...] = () + dependencies: Tuple[str, ...] = () # group string allows grouping services together group: Optional[str] = None # private, per-node directories required by this service - dirs: tuple[str, ...] = () + dirs: Tuple[str, ...] = () # config files written by this service - configs: tuple[str, ...] = () + configs: Tuple[str, ...] = () # config file data - config_data: dict[str, str] = {} + config_data: Dict[str, str] = {} # list of startup commands - startup: tuple[str, ...] = () + startup: Tuple[str, ...] = () # list of shutdown commands - shutdown: tuple[str, ...] = () + shutdown: Tuple[str, ...] = () # list of validate commands - validate: tuple[str, ...] = () + validate: Tuple[str, ...] = () # validation mode, used to determine startup success validation_mode: ServiceMode = ServiceMode.NON_BLOCKING @@ -714,7 +833,7 @@ class CoreService: configuration is used to override their default parameters. """ self.custom: bool = True - self.config_data: dict[str, str] = self.__class__.config_data.copy() + self.config_data: Dict[str, str] = self.__class__.config_data.copy() @classmethod def on_load(cls) -> None: @@ -733,7 +852,7 @@ class CoreService: return cls.configs @classmethod - def generate_config(cls, node: CoreNode, filename: str) -> str: + def generate_config(cls, node: CoreNode, filename: str) -> None: """ Generate configuration file given a node object. The filename is provided to allow for multiple config files. @@ -742,7 +861,7 @@ class CoreService: :param node: node to generate config for :param filename: file name to generate config for - :return: generated config + :return: nothing """ raise NotImplementedError diff --git a/daemon/core/services/emaneservices.py b/daemon/core/services/emaneservices.py index 43cd9af4..4fd78ec1 100644 --- a/daemon/core/services/emaneservices.py +++ b/daemon/core/services/emaneservices.py @@ -1,3 +1,5 @@ +from typing import Tuple + from core.emane.nodes import EmaneNet from core.nodes.base import CoreNode from core.services.coreservices import CoreService @@ -7,14 +9,14 @@ from core.xml import emanexml class EmaneTransportService(CoreService): name: str = "transportd" group: str = "EMANE" - executables: tuple[str, ...] = ("emanetransportd", "emanegentransportxml") - dependencies: tuple[str, ...] = () - dirs: tuple[str, ...] = () - configs: tuple[str, ...] = ("emanetransport.sh",) - startup: tuple[str, ...] = (f"bash {configs[0]}",) - validate: tuple[str, ...] = (f"pidof {executables[0]}",) + executables: Tuple[str, ...] = ("emanetransportd", "emanegentransportxml") + dependencies: Tuple[str, ...] 
= () + dirs: Tuple[str, ...] = () + configs: Tuple[str, ...] = ("emanetransport.sh",) + startup: Tuple[str, ...] = (f"bash {configs[0]}",) + validate: Tuple[str, ...] = (f"pidof {executables[0]}",) validation_timer: float = 0.5 - shutdown: tuple[str, ...] = (f"killall {executables[0]}",) + shutdown: Tuple[str, ...] = (f"killall {executables[0]}",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: diff --git a/daemon/core/services/frr.py b/daemon/core/services/frr.py index 28756c19..0677a1d8 100644 --- a/daemon/core/services/frr.py +++ b/daemon/core/services/frr.py @@ -2,44 +2,33 @@ frr.py: defines routing services provided by FRRouting. Assumes installation of FRR via https://deb.frrouting.org/ """ -from typing import Optional +from typing import Optional, Tuple import netaddr from core.emane.nodes import EmaneNet -from core.nodes.base import CoreNode, NodeBase +from core.nodes.base import CoreNode from core.nodes.interface import DEFAULT_MTU, CoreInterface from core.nodes.network import PtpNet, WlanNode from core.nodes.physical import Rj45Node -from core.nodes.wireless import WirelessNode from core.services.coreservices import CoreService FRR_STATE_DIR: str = "/var/run/frr" -def is_wireless(node: NodeBase) -> bool: - """ - Check if the node is a wireless type node. - - :param node: node to check type for - :return: True if wireless type, False otherwise - """ - return isinstance(node, (WlanNode, EmaneNet, WirelessNode)) - - class FRRZebra(CoreService): name: str = "FRRzebra" group: str = "FRR" - dirs: tuple[str, ...] = ("/usr/local/etc/frr", "/var/run/frr", "/var/log/frr") - configs: tuple[str, ...] = ( + dirs: Tuple[str, ...] = ("/usr/local/etc/frr", "/var/run/frr", "/var/log/frr") + configs: Tuple[str, ...] = ( "/usr/local/etc/frr/frr.conf", "frrboot.sh", "/usr/local/etc/frr/vtysh.conf", "/usr/local/etc/frr/daemons", ) - startup: tuple[str, ...] = ("bash frrboot.sh zebra",) - shutdown: tuple[str, ...] = ("killall zebra",) - validate: tuple[str, ...] = ("pidof zebra",) + startup: Tuple[str, ...] = ("bash frrboot.sh zebra",) + shutdown: Tuple[str, ...] = ("killall zebra",) + validate: Tuple[str, ...] = ("pidof zebra",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -75,9 +64,9 @@ class FRRZebra(CoreService): # we could verify here that filename == frr.conf cfg = "" for iface in node.get_ifaces(): - cfg += f"interface {iface.name}\n" + cfg += "interface %s\n" % iface.name # include control interfaces in addressing but not routing daemons - if iface.control: + if hasattr(iface, "control") and iface.control is True: cfg += " " cfg += "\n ".join(map(cls.addrstr, iface.ips())) cfg += "\n" @@ -127,34 +116,33 @@ class FRRZebra(CoreService): """ address = str(ip.ip) if netaddr.valid_ipv4(address): - return f"ip address {ip}" + return "ip address %s" % ip elif netaddr.valid_ipv6(address): - return f"ipv6 address {ip}" + return "ipv6 address %s" % ip else: - raise ValueError(f"invalid address: {ip}") + raise ValueError("invalid address: %s", ip) @classmethod def generate_frr_boot(cls, node: CoreNode) -> str: """ Generate a shell script used to boot the FRR daemons. 
""" - frr_bin_search = node.session.options.get( - "frr_bin_search", '"/usr/local/bin /usr/bin /usr/lib/frr"' + frr_bin_search = node.session.options.get_config( + "frr_bin_search", default='"/usr/local/bin /usr/bin /usr/lib/frr"' ) - frr_sbin_search = node.session.options.get( - "frr_sbin_search", - '"/usr/local/sbin /usr/sbin /usr/lib/frr /usr/libexec/frr"', + frr_sbin_search = node.session.options.get_config( + "frr_sbin_search", default='"/usr/local/sbin /usr/sbin /usr/lib/frr"' ) - cfg = f"""\ + cfg = """\ #!/bin/sh # auto-generated by zebra service (frr.py) -FRR_CONF={cls.configs[0]} -FRR_SBIN_SEARCH={frr_sbin_search} -FRR_BIN_SEARCH={frr_bin_search} -FRR_STATE_DIR={FRR_STATE_DIR} +FRR_CONF=%s +FRR_SBIN_SEARCH=%s +FRR_BIN_SEARCH=%s +FRR_STATE_DIR=%s searchforprog() -{{ +{ prog=$1 searchpath=$@ ret= @@ -165,10 +153,10 @@ searchforprog() fi done echo $ret -}} +} confcheck() -{{ +{ CONF_DIR=`dirname $FRR_CONF` # if /etc/frr exists, point /etc/frr/frr.conf -> CONF_DIR if [ "$CONF_DIR" != "/etc/frr" ] && [ -d /etc/frr ] && [ ! -e /etc/frr/frr.conf ]; then @@ -178,10 +166,10 @@ confcheck() if [ "$CONF_DIR" != "/etc/frr" ] && [ -d /etc/frr ] && [ ! -e /etc/frr/vtysh.conf ]; then ln -s $CONF_DIR/vtysh.conf /etc/frr/vtysh.conf fi -}} +} bootdaemon() -{{ +{ FRR_SBIN_DIR=$(searchforprog $1 $FRR_SBIN_SEARCH) if [ "z$FRR_SBIN_DIR" = "z" ]; then echo "ERROR: FRR's '$1' daemon not found in search path:" @@ -196,10 +184,6 @@ bootdaemon() flags="$flags -6" fi - if [ "$1" = "ospfd" ]; then - flags="$flags --apiserver" - fi - #force FRR to use CORE generated conf file flags="$flags -d -f $FRR_CONF" $FRR_SBIN_DIR/$1 $flags @@ -208,10 +192,10 @@ bootdaemon() echo "ERROR: FRR's '$1' daemon failed to start!:" return 1 fi -}} +} bootfrr() -{{ +{ FRR_BIN_DIR=$(searchforprog 'vtysh' $FRR_BIN_SEARCH) if [ "z$FRR_BIN_DIR" = "z" ]; then echo "ERROR: FRR's 'vtysh' program not found in search path:" @@ -230,8 +214,8 @@ bootfrr() bootdaemon "staticd" fi for r in rip ripng ospf6 ospf bgp babel isis; do - if grep -q "^router \\<${{r}}\\>" $FRR_CONF; then - bootdaemon "${{r}}d" + if grep -q "^router \\<${r}\\>" $FRR_CONF; then + bootdaemon "${r}d" fi done @@ -240,7 +224,7 @@ bootfrr() fi $FRR_BIN_DIR/vtysh -b -}} +} if [ "$1" != "zebra" ]; then echo "WARNING: '$1': all FRR daemons are launched by the 'zebra' service!" @@ -249,7 +233,12 @@ fi confcheck bootfrr -""" +""" % ( + cls.configs[0], + frr_sbin_search, + frr_bin_search, + FRR_STATE_DIR, + ) for iface in node.get_ifaces(): cfg += f"ip link set dev {iface.name} down\n" cfg += "sleep 1\n" @@ -333,7 +322,7 @@ class FrrService(CoreService): name: Optional[str] = None group: str = "FRR" - dependencies: tuple[str, ...] = ("FRRzebra",) + dependencies: Tuple[str, ...] = ("FRRzebra",) meta: str = "The config file for this service can be found in the Zebra service." ipv4_routing: bool = False ipv6_routing: bool = False @@ -384,8 +373,8 @@ class FRROspfv2(FrrService): """ name: str = "FRROSPFv2" - shutdown: tuple[str, ...] = ("killall ospfd",) - validate: tuple[str, ...] = ("pidof ospfd",) + shutdown: Tuple[str, ...] = ("killall ospfd",) + validate: Tuple[str, ...] 
= ("pidof ospfd",) ipv4_routing: bool = True @staticmethod @@ -420,30 +409,17 @@ class FRROspfv2(FrrService): def generate_frr_config(cls, node: CoreNode) -> str: cfg = "router ospf\n" rtrid = cls.router_id(node) - cfg += f" router-id {rtrid}\n" + cfg += " router-id %s\n" % rtrid # network 10.0.0.0/24 area 0 for iface in node.get_ifaces(control=False): for ip4 in iface.ip4s: cfg += f" network {ip4} area 0\n" - cfg += " ospf opaque-lsa\n" cfg += "!\n" return cfg @classmethod def generate_frr_iface_config(cls, node: CoreNode, iface: CoreInterface) -> str: - cfg = cls.mtu_check(iface) - # external RJ45 connections will use default OSPF timers - if cls.rj45check(iface): - return cfg - cfg += cls.ptp_check(iface) - return ( - cfg - + """\ - ip ospf hello-interval 2 - ip ospf dead-interval 6 - ip ospf retransmit-interval 5 -""" - ) + return cls.mtu_check(iface) class FRROspfv3(FrrService): @@ -454,8 +430,8 @@ class FRROspfv3(FrrService): """ name: str = "FRROSPFv3" - shutdown: tuple[str, ...] = ("killall ospf6d",) - validate: tuple[str, ...] = ("pidof ospf6d",) + shutdown: Tuple[str, ...] = ("killall ospf6d",) + validate: Tuple[str, ...] = ("pidof ospf6d",) ipv4_routing: bool = True ipv6_routing: bool = True @@ -482,7 +458,7 @@ class FRROspfv3(FrrService): """ minmtu = cls.min_mtu(iface) if minmtu < iface.mtu: - return f" ipv6 ospf6 ifmtu {minmtu:d}\n" + return " ipv6 ospf6 ifmtu %d\n" % minmtu else: return "" @@ -500,15 +476,27 @@ class FRROspfv3(FrrService): def generate_frr_config(cls, node: CoreNode) -> str: cfg = "router ospf6\n" rtrid = cls.router_id(node) - cfg += f" router-id {rtrid}\n" + cfg += " router-id %s\n" % rtrid for iface in node.get_ifaces(control=False): - cfg += f" interface {iface.name} area 0.0.0.0\n" + cfg += " interface %s area 0.0.0.0\n" % iface.name cfg += "!\n" return cfg @classmethod def generate_frr_iface_config(cls, node: CoreNode, iface: CoreInterface) -> str: return cls.mtu_check(iface) + # cfg = cls.mtucheck(ifc) + # external RJ45 connections will use default OSPF timers + # if cls.rj45check(ifc): + # return cfg + # cfg += cls.ptpcheck(ifc) + # return cfg + """\ + + +# ipv6 ospf6 hello-interval 2 +# ipv6 ospf6 dead-interval 6 +# ipv6 ospf6 retransmit-interval 5 +# """ class FRRBgp(FrrService): @@ -519,8 +507,8 @@ class FRRBgp(FrrService): """ name: str = "FRRBGP" - shutdown: tuple[str, ...] = ("killall bgpd",) - validate: tuple[str, ...] = ("pidof bgpd",) + shutdown: Tuple[str, ...] = ("killall bgpd",) + validate: Tuple[str, ...] = ("pidof bgpd",) custom_needed: bool = True ipv4_routing: bool = True ipv6_routing: bool = True @@ -530,9 +518,9 @@ class FRRBgp(FrrService): cfg = "!\n! BGP configuration\n!\n" cfg += "! You should configure the AS number below,\n" cfg += "! along with this router's peers.\n!\n" - cfg += f"router bgp {node.id}\n" + cfg += "router bgp %s\n" % node.id rtrid = cls.router_id(node) - cfg += f" bgp router-id {rtrid}\n" + cfg += " bgp router-id %s\n" % rtrid cfg += " redistribute connected\n" cfg += "! neighbor 1.2.3.4 remote-as 555\n!\n" return cfg @@ -544,8 +532,8 @@ class FRRRip(FrrService): """ name: str = "FRRRIP" - shutdown: tuple[str, ...] = ("killall ripd",) - validate: tuple[str, ...] = ("pidof ripd",) + shutdown: Tuple[str, ...] = ("killall ripd",) + validate: Tuple[str, ...] = ("pidof ripd",) ipv4_routing: bool = True @classmethod @@ -567,8 +555,8 @@ class FRRRipng(FrrService): """ name: str = "FRRRIPNG" - shutdown: tuple[str, ...] = ("killall ripngd",) - validate: tuple[str, ...] = ("pidof ripngd",) + shutdown: Tuple[str, ...] 
= ("killall ripngd",) + validate: Tuple[str, ...] = ("pidof ripngd",) ipv6_routing: bool = True @classmethod @@ -591,21 +579,21 @@ class FRRBabel(FrrService): """ name: str = "FRRBabel" - shutdown: tuple[str, ...] = ("killall babeld",) - validate: tuple[str, ...] = ("pidof babeld",) + shutdown: Tuple[str, ...] = ("killall babeld",) + validate: Tuple[str, ...] = ("pidof babeld",) ipv6_routing: bool = True @classmethod def generate_frr_config(cls, node: CoreNode) -> str: cfg = "router babel\n" for iface in node.get_ifaces(control=False): - cfg += f" network {iface.name}\n" + cfg += " network %s\n" % iface.name cfg += " redistribute static\n redistribute ipv4 connected\n" return cfg @classmethod def generate_frr_iface_config(cls, node: CoreNode, iface: CoreInterface) -> str: - if is_wireless(iface.net): + if iface.net and isinstance(iface.net, (EmaneNet, WlanNode)): return " babel wireless\n no babel split-horizon\n" else: return " babel wired\n babel split-horizon\n" @@ -617,8 +605,8 @@ class FRRpimd(FrrService): """ name: str = "FRRpimd" - shutdown: tuple[str, ...] = ("killall pimd",) - validate: tuple[str, ...] = ("pidof pimd",) + shutdown: Tuple[str, ...] = ("killall pimd",) + validate: Tuple[str, ...] = ("pidof pimd",) ipv4_routing: bool = True @classmethod @@ -632,8 +620,8 @@ class FRRpimd(FrrService): cfg += "router igmp\n!\n" cfg += "router pim\n" cfg += " !ip pim rp-address 10.0.0.1\n" - cfg += f" ip pim bsr-candidate {ifname}\n" - cfg += f" ip pim rp-candidate {ifname}\n" + cfg += " ip pim bsr-candidate %s\n" % ifname + cfg += " ip pim rp-candidate %s\n" % ifname cfg += " !ip pim spt-threshold interval 10 bytes 80000\n" return cfg @@ -650,8 +638,8 @@ class FRRIsis(FrrService): """ name: str = "FRRISIS" - shutdown: tuple[str, ...] = ("killall isisd",) - validate: tuple[str, ...] = ("pidof isisd",) + shutdown: Tuple[str, ...] = ("killall isisd",) + validate: Tuple[str, ...] = ("pidof isisd",) ipv4_routing: bool = True ipv6_routing: bool = True @@ -668,7 +656,7 @@ class FRRIsis(FrrService): @classmethod def generate_frr_config(cls, node: CoreNode) -> str: cfg = "router isis DEFAULT\n" - cfg += f" net 47.0001.0000.1900.{node.id:04x}.00\n" + cfg += " net 47.0001.0000.1900.%04x.00\n" % node.id cfg += " metric-style wide\n" cfg += " is-type level-2-only\n" cfg += "!\n" diff --git a/daemon/core/services/nrl.py b/daemon/core/services/nrl.py index 32e19f60..91e053b2 100644 --- a/daemon/core/services/nrl.py +++ b/daemon/core/services/nrl.py @@ -2,7 +2,7 @@ nrl.py: defines services provided by NRL protolib tools hosted here: http://www.nrl.navy.mil/itd/ncs/products """ -from typing import Optional +from typing import Optional, Tuple from core import utils from core.nodes.base import CoreNode @@ -33,29 +33,29 @@ class NrlService(CoreService): ip4 = iface.get_ip4() if ip4: return f"{ip4.ip}/{prefixlen}" - return f"0.0.0.0/{prefixlen}" + return "0.0.0.0/%s" % prefixlen class MgenSinkService(NrlService): name: str = "MGEN_Sink" - executables: tuple[str, ...] = ("mgen",) - configs: tuple[str, ...] = ("sink.mgen",) - startup: tuple[str, ...] = ("mgen input sink.mgen",) - validate: tuple[str, ...] = ("pidof mgen",) - shutdown: tuple[str, ...] = ("killall mgen",) + executables: Tuple[str, ...] = ("mgen",) + configs: Tuple[str, ...] = ("sink.mgen",) + startup: Tuple[str, ...] = ("mgen input sink.mgen",) + validate: Tuple[str, ...] = ("pidof mgen",) + shutdown: Tuple[str, ...] 
= ("killall mgen",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: cfg = "0.0 LISTEN UDP 5000\n" for iface in node.get_ifaces(): name = utils.sysctl_devname(iface.name) - cfg += f"0.0 Join 224.225.1.2 INTERFACE {name}\n" + cfg += "0.0 Join 224.225.1.2 INTERFACE %s\n" % name return cfg @classmethod - def get_startup(cls, node: CoreNode) -> tuple[str, ...]: + def get_startup(cls, node: CoreNode) -> Tuple[str, ...]: cmd = cls.startup[0] - cmd += f" output /tmp/mgen_{node.name}.log" + cmd += " output /tmp/mgen_%s.log" % node.name return (cmd,) @@ -65,23 +65,23 @@ class NrlNhdp(NrlService): """ name: str = "NHDP" - executables: tuple[str, ...] = ("nrlnhdp",) - startup: tuple[str, ...] = ("nrlnhdp",) - shutdown: tuple[str, ...] = ("killall nrlnhdp",) - validate: tuple[str, ...] = ("pidof nrlnhdp",) + executables: Tuple[str, ...] = ("nrlnhdp",) + startup: Tuple[str, ...] = ("nrlnhdp",) + shutdown: Tuple[str, ...] = ("killall nrlnhdp",) + validate: Tuple[str, ...] = ("pidof nrlnhdp",) @classmethod - def get_startup(cls, node: CoreNode) -> tuple[str, ...]: + def get_startup(cls, node: CoreNode) -> Tuple[str, ...]: """ Generate the appropriate command-line based on node interfaces. """ cmd = cls.startup[0] cmd += " -l /var/log/nrlnhdp.log" - cmd += f" -rpipe {node.name}_nhdp" + cmd += " -rpipe %s_nhdp" % node.name servicenames = map(lambda x: x.name, node.services) if "SMF" in servicenames: cmd += " -flooding ecds" - cmd += f" -smfClient {node.name}_smf" + cmd += " -smfClient %s_smf" % node.name ifaces = node.get_ifaces(control=False) if len(ifaces) > 0: iface_names = map(lambda x: x.name, ifaces) @@ -96,11 +96,11 @@ class NrlSmf(NrlService): """ name: str = "SMF" - executables: tuple[str, ...] = ("nrlsmf",) - startup: tuple[str, ...] = ("bash startsmf.sh",) - shutdown: tuple[str, ...] = ("killall nrlsmf",) - validate: tuple[str, ...] = ("pidof nrlsmf",) - configs: tuple[str, ...] = ("startsmf.sh",) + executables: Tuple[str, ...] = ("nrlsmf",) + startup: Tuple[str, ...] = ("bash startsmf.sh",) + shutdown: Tuple[str, ...] = ("killall nrlsmf",) + validate: Tuple[str, ...] = ("pidof nrlsmf",) + configs: Tuple[str, ...] = ("startsmf.sh",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -112,12 +112,18 @@ class NrlSmf(NrlService): cfg = "#!/bin/sh\n" cfg += "# auto-generated by nrl.py:NrlSmf.generateconfig()\n" comments = "" - cmd = f"nrlsmf instance {node.name}_smf" + cmd = "nrlsmf instance %s_smf" % node.name servicenames = map(lambda x: x.name, node.services) ifaces = node.get_ifaces(control=False) if len(ifaces) == 0: return "" + + if "arouted" in servicenames: + comments += "# arouted service is enabled\n" + cmd += " tap %s_tap" % (node.name,) + cmd += " unicast %s" % cls.firstipv4prefix(node, 24) + cmd += " push lo,%s resequence on" % ifaces[0].name if len(ifaces) > 0: if "NHDP" in servicenames: comments += "# NHDP service is enabled\n" @@ -142,13 +148,13 @@ class NrlOlsr(NrlService): """ name: str = "OLSR" - executables: tuple[str, ...] = ("nrlolsrd",) - startup: tuple[str, ...] = ("nrlolsrd",) - shutdown: tuple[str, ...] = ("killall nrlolsrd",) - validate: tuple[str, ...] = ("pidof nrlolsrd",) + executables: Tuple[str, ...] = ("nrlolsrd",) + startup: Tuple[str, ...] = ("nrlolsrd",) + shutdown: Tuple[str, ...] = ("killall nrlolsrd",) + validate: Tuple[str, ...] 
= ("pidof nrlolsrd",) @classmethod - def get_startup(cls, node: CoreNode) -> tuple[str, ...]: + def get_startup(cls, node: CoreNode) -> Tuple[str, ...]: """ Generate the appropriate command-line based on node interfaces. """ @@ -157,13 +163,13 @@ class NrlOlsr(NrlService): ifaces = node.get_ifaces() if len(ifaces) > 0: iface = ifaces[0] - cmd += f" -i {iface.name}" + cmd += " -i %s" % iface.name cmd += " -l /var/log/nrlolsrd.log" - cmd += f" -rpipe {node.name}_olsr" + cmd += " -rpipe %s_olsr" % node.name servicenames = map(lambda x: x.name, node.services) if "SMF" in servicenames and "NHDP" not in servicenames: cmd += " -flooding s-mpr" - cmd += f" -smfClient {node.name}_smf" + cmd += " -smfClient %s_smf" % node.name if "zebra" in servicenames: cmd += " -z" return (cmd,) @@ -175,23 +181,23 @@ class NrlOlsrv2(NrlService): """ name: str = "OLSRv2" - executables: tuple[str, ...] = ("nrlolsrv2",) - startup: tuple[str, ...] = ("nrlolsrv2",) - shutdown: tuple[str, ...] = ("killall nrlolsrv2",) - validate: tuple[str, ...] = ("pidof nrlolsrv2",) + executables: Tuple[str, ...] = ("nrlolsrv2",) + startup: Tuple[str, ...] = ("nrlolsrv2",) + shutdown: Tuple[str, ...] = ("killall nrlolsrv2",) + validate: Tuple[str, ...] = ("pidof nrlolsrv2",) @classmethod - def get_startup(cls, node: CoreNode) -> tuple[str, ...]: + def get_startup(cls, node: CoreNode) -> Tuple[str, ...]: """ Generate the appropriate command-line based on node interfaces. """ cmd = cls.startup[0] cmd += " -l /var/log/nrlolsrv2.log" - cmd += f" -rpipe {node.name}_olsrv2" + cmd += " -rpipe %s_olsrv2" % node.name servicenames = map(lambda x: x.name, node.services) if "SMF" in servicenames: cmd += " -flooding ecds" - cmd += f" -smfClient {node.name}_smf" + cmd += " -smfClient %s_smf" % node.name cmd += " -p olsr" ifaces = node.get_ifaces(control=False) if len(ifaces) > 0: @@ -207,15 +213,15 @@ class OlsrOrg(NrlService): """ name: str = "OLSRORG" - executables: tuple[str, ...] = ("olsrd",) - configs: tuple[str, ...] = ("/etc/olsrd/olsrd.conf",) - dirs: tuple[str, ...] = ("/etc/olsrd",) - startup: tuple[str, ...] = ("olsrd",) - shutdown: tuple[str, ...] = ("killall olsrd",) - validate: tuple[str, ...] = ("pidof olsrd",) + executables: Tuple[str, ...] = ("olsrd",) + configs: Tuple[str, ...] = ("/etc/olsrd/olsrd.conf",) + dirs: Tuple[str, ...] = ("/etc/olsrd",) + startup: Tuple[str, ...] = ("olsrd",) + shutdown: Tuple[str, ...] = ("killall olsrd",) + validate: Tuple[str, ...] = ("pidof olsrd",) @classmethod - def get_startup(cls, node: CoreNode) -> tuple[str, ...]: + def get_startup(cls, node: CoreNode) -> Tuple[str, ...]: """ Generate the appropriate command-line based on node interfaces. """ @@ -558,11 +564,11 @@ class MgenActor(NrlService): # a unique name is required, without spaces name: str = "MgenActor" group: str = "ProtoSvc" - executables: tuple[str, ...] = ("mgen",) - configs: tuple[str, ...] = ("start_mgen_actor.sh",) - startup: tuple[str, ...] = ("bash start_mgen_actor.sh",) - validate: tuple[str, ...] = ("pidof mgen",) - shutdown: tuple[str, ...] = ("killall mgen",) + executables: Tuple[str, ...] = ("mgen",) + configs: Tuple[str, ...] = ("start_mgen_actor.sh",) + startup: Tuple[str, ...] = ("bash start_mgen_actor.sh",) + validate: Tuple[str, ...] = ("pidof mgen",) + shutdown: Tuple[str, ...] 
= ("killall mgen",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -574,9 +580,52 @@ class MgenActor(NrlService): cfg = "#!/bin/sh\n" cfg += "# auto-generated by nrl.py:MgenActor.generateconfig()\n" comments = "" - cmd = f"mgenBasicActor.py -n {node.name} -a 0.0.0.0" + cmd = "mgenBasicActor.py -n %s -a 0.0.0.0" % node.name ifaces = node.get_ifaces(control=False) if len(ifaces) == 0: return "" cfg += comments + cmd + " < /dev/null > /dev/null 2>&1 &\n\n" return cfg + + +class Arouted(NrlService): + """ + Adaptive Routing + """ + + name: str = "arouted" + executables: Tuple[str, ...] = ("arouted",) + configs: Tuple[str, ...] = ("startarouted.sh",) + startup: Tuple[str, ...] = ("bash startarouted.sh",) + shutdown: Tuple[str, ...] = ("pkill arouted",) + validate: Tuple[str, ...] = ("pidof arouted",) + + @classmethod + def generate_config(cls, node: CoreNode, filename: str) -> str: + """ + Return the Quagga.conf or quaggaboot.sh file contents. + """ + cfg = ( + """ +#!/bin/sh +for f in "/tmp/%s_smf"; do + count=1 + until [ -e "$f" ]; do + if [ $count -eq 10 ]; then + echo "ERROR: nrlmsf pipe not found: $f" >&2 + exit 1 + fi + sleep 0.1 + count=$(($count + 1)) + done +done + +""" + % node.name + ) + cfg += "ip route add %s dev lo\n" % cls.firstipv4prefix(node, 24) + cfg += "arouted instance %s_smf tap %s_tap" % (node.name, node.name) + # seconds to consider a new route valid + cfg += " stability 10" + cfg += " 2>&1 > /var/log/arouted.log &\n\n" + return cfg diff --git a/daemon/core/services/quagga.py b/daemon/core/services/quagga.py index b96a8eae..f47da1d0 100644 --- a/daemon/core/services/quagga.py +++ b/daemon/core/services/quagga.py @@ -1,43 +1,33 @@ """ quagga.py: defines routing services provided by Quagga. """ -from typing import Optional +from typing import Optional, Tuple import netaddr from core.emane.nodes import EmaneNet -from core.nodes.base import CoreNode, NodeBase +from core.emulator.enumerations import LinkTypes +from core.nodes.base import CoreNode from core.nodes.interface import DEFAULT_MTU, CoreInterface from core.nodes.network import PtpNet, WlanNode from core.nodes.physical import Rj45Node -from core.nodes.wireless import WirelessNode from core.services.coreservices import CoreService QUAGGA_STATE_DIR: str = "/var/run/quagga" -def is_wireless(node: NodeBase) -> bool: - """ - Check if the node is a wireless type node. - - :param node: node to check type for - :return: True if wireless type, False otherwise - """ - return isinstance(node, (WlanNode, EmaneNet, WirelessNode)) - - class Zebra(CoreService): name: str = "zebra" group: str = "Quagga" - dirs: tuple[str, ...] = ("/usr/local/etc/quagga", "/var/run/quagga") - configs: tuple[str, ...] = ( + dirs: Tuple[str, ...] = ("/usr/local/etc/quagga", "/var/run/quagga") + configs: Tuple[str, ...] = ( "/usr/local/etc/quagga/Quagga.conf", "quaggaboot.sh", "/usr/local/etc/quagga/vtysh.conf", ) - startup: tuple[str, ...] = ("bash quaggaboot.sh zebra",) - shutdown: tuple[str, ...] = ("killall zebra",) - validate: tuple[str, ...] = ("pidof zebra",) + startup: Tuple[str, ...] = ("bash quaggaboot.sh zebra",) + shutdown: Tuple[str, ...] = ("killall zebra",) + validate: Tuple[str, ...] 
= ("pidof zebra",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -71,9 +61,9 @@ class Zebra(CoreService): # we could verify here that filename == Quagga.conf cfg = "" for iface in node.get_ifaces(): - cfg += f"interface {iface.name}\n" + cfg += "interface %s\n" % iface.name # include control interfaces in addressing but not routing daemons - if iface.control: + if getattr(iface, "control", False): cfg += " " cfg += "\n ".join(map(cls.addrstr, iface.ips())) cfg += "\n" @@ -123,33 +113,33 @@ class Zebra(CoreService): """ address = str(ip.ip) if netaddr.valid_ipv4(address): - return f"ip address {ip}" + return "ip address %s" % ip elif netaddr.valid_ipv6(address): - return f"ipv6 address {ip}" + return "ipv6 address %s" % ip else: - raise ValueError(f"invalid address: {ip}") + raise ValueError("invalid address: %s", ip) @classmethod def generate_quagga_boot(cls, node: CoreNode) -> str: """ Generate a shell script used to boot the Quagga daemons. """ - quagga_bin_search = node.session.options.get( - "quagga_bin_search", '"/usr/local/bin /usr/bin /usr/lib/quagga"' + quagga_bin_search = node.session.options.get_config( + "quagga_bin_search", default='"/usr/local/bin /usr/bin /usr/lib/quagga"' ) - quagga_sbin_search = node.session.options.get( - "quagga_sbin_search", '"/usr/local/sbin /usr/sbin /usr/lib/quagga"' + quagga_sbin_search = node.session.options.get_config( + "quagga_sbin_search", default='"/usr/local/sbin /usr/sbin /usr/lib/quagga"' ) - return f"""\ + return """\ #!/bin/sh # auto-generated by zebra service (quagga.py) -QUAGGA_CONF={cls.configs[0]} -QUAGGA_SBIN_SEARCH={quagga_sbin_search} -QUAGGA_BIN_SEARCH={quagga_bin_search} -QUAGGA_STATE_DIR={QUAGGA_STATE_DIR} +QUAGGA_CONF=%s +QUAGGA_SBIN_SEARCH=%s +QUAGGA_BIN_SEARCH=%s +QUAGGA_STATE_DIR=%s searchforprog() -{{ +{ prog=$1 searchpath=$@ ret= @@ -160,10 +150,10 @@ searchforprog() fi done echo $ret -}} +} confcheck() -{{ +{ CONF_DIR=`dirname $QUAGGA_CONF` # if /etc/quagga exists, point /etc/quagga/Quagga.conf -> CONF_DIR if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/Quagga.conf ]; then @@ -173,10 +163,10 @@ confcheck() if [ "$CONF_DIR" != "/etc/quagga" ] && [ -d /etc/quagga ] && [ ! -e /etc/quagga/vtysh.conf ]; then ln -s $CONF_DIR/vtysh.conf /etc/quagga/vtysh.conf fi -}} +} bootdaemon() -{{ +{ QUAGGA_SBIN_DIR=$(searchforprog $1 $QUAGGA_SBIN_SEARCH) if [ "z$QUAGGA_SBIN_DIR" = "z" ]; then echo "ERROR: Quagga's '$1' daemon not found in search path:" @@ -196,10 +186,10 @@ bootdaemon() echo "ERROR: Quagga's '$1' daemon failed to start!:" return 1 fi -}} +} bootquagga() -{{ +{ QUAGGA_BIN_DIR=$(searchforprog 'vtysh' $QUAGGA_BIN_SEARCH) if [ "z$QUAGGA_BIN_DIR" = "z" ]; then echo "ERROR: Quagga's 'vtysh' program not found in search path:" @@ -215,8 +205,8 @@ bootquagga() bootdaemon "zebra" for r in rip ripng ospf6 ospf bgp babel; do - if grep -q "^router \\<${{r}}\\>" $QUAGGA_CONF; then - bootdaemon "${{r}}d" + if grep -q "^router \\<${r}\\>" $QUAGGA_CONF; then + bootdaemon "${r}d" fi done @@ -225,7 +215,7 @@ bootquagga() fi $QUAGGA_BIN_DIR/vtysh -b -}} +} if [ "$1" != "zebra" ]; then echo "WARNING: '$1': all Quagga daemons are launched by the 'zebra' service!" 
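[Editor's note: illustrative sketch, not part of this diff.] The {{ -> { and }} -> } changes in the boot-script hunks are a direct consequence of dropping the f-string template: literal shell braces must be doubled inside an f-string, while a %-formatted template leaves them untouched. A minimal standalone sketch, reusing only names visible in the quagga.py template above:

state_dir = "/var/run/quagga"

# f-string template: every literal shell brace has to be written as {{ or }}
fstring_script = f"""\
QUAGGA_STATE_DIR={state_dir}
confcheck()
{{
    echo checking
}}
"""

# %-formatted template: braces stay single, only %s is substituted
percent_script = """\
QUAGGA_STATE_DIR=%s
confcheck()
{
    echo checking
}
""" % state_dir

assert fstring_script == percent_script  # both render identical shell text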
@@ -233,7 +223,12 @@ if [ "$1" != "zebra" ]; then fi confcheck bootquagga -""" +""" % ( + cls.configs[0], + quagga_sbin_search, + quagga_bin_search, + QUAGGA_STATE_DIR, + ) class QuaggaService(CoreService): @@ -244,7 +239,7 @@ class QuaggaService(CoreService): name: Optional[str] = None group: str = "Quagga" - dependencies: tuple[str, ...] = (Zebra.name,) + dependencies: Tuple[str, ...] = (Zebra.name,) meta: str = "The config file for this service can be found in the Zebra service." ipv4_routing: bool = False ipv6_routing: bool = False @@ -295,8 +290,8 @@ class Ospfv2(QuaggaService): """ name: str = "OSPFv2" - shutdown: tuple[str, ...] = ("killall ospfd",) - validate: tuple[str, ...] = ("pidof ospfd",) + shutdown: Tuple[str, ...] = ("killall ospfd",) + validate: Tuple[str, ...] = ("pidof ospfd",) ipv4_routing: bool = True @staticmethod @@ -331,7 +326,7 @@ class Ospfv2(QuaggaService): def generate_quagga_config(cls, node: CoreNode) -> str: cfg = "router ospf\n" rtrid = cls.router_id(node) - cfg += f" router-id {rtrid}\n" + cfg += " router-id %s\n" % rtrid # network 10.0.0.0/24 area 0 for iface in node.get_ifaces(control=False): for ip4 in iface.ip4s: @@ -364,8 +359,8 @@ class Ospfv3(QuaggaService): """ name: str = "OSPFv3" - shutdown: tuple[str, ...] = ("killall ospf6d",) - validate: tuple[str, ...] = ("pidof ospf6d",) + shutdown: Tuple[str, ...] = ("killall ospf6d",) + validate: Tuple[str, ...] = ("pidof ospf6d",) ipv4_routing: bool = True ipv6_routing: bool = True @@ -392,7 +387,7 @@ class Ospfv3(QuaggaService): """ minmtu = cls.min_mtu(iface) if minmtu < iface.mtu: - return f" ipv6 ospf6 ifmtu {minmtu:d}\n" + return " ipv6 ospf6 ifmtu %d\n" % minmtu else: return "" @@ -411,9 +406,9 @@ class Ospfv3(QuaggaService): cfg = "router ospf6\n" rtrid = cls.router_id(node) cfg += " instance-id 65\n" - cfg += f" router-id {rtrid}\n" + cfg += " router-id %s\n" % rtrid for iface in node.get_ifaces(control=False): - cfg += f" interface {iface.name} area 0.0.0.0\n" + cfg += " interface %s area 0.0.0.0\n" % iface.name cfg += "!\n" return cfg @@ -436,7 +431,7 @@ class Ospfv3mdr(Ospfv3): @classmethod def generate_quagga_iface_config(cls, node: CoreNode, iface: CoreInterface) -> str: cfg = cls.mtu_check(iface) - if is_wireless(iface.net): + if iface.net is not None and isinstance(iface.net, (WlanNode, EmaneNet)): return ( cfg + """\ @@ -461,8 +456,8 @@ class Bgp(QuaggaService): """ name: str = "BGP" - shutdown: tuple[str, ...] = ("killall bgpd",) - validate: tuple[str, ...] = ("pidof bgpd",) + shutdown: Tuple[str, ...] = ("killall bgpd",) + validate: Tuple[str, ...] = ("pidof bgpd",) custom_needed: bool = True ipv4_routing: bool = True ipv6_routing: bool = True @@ -472,9 +467,9 @@ class Bgp(QuaggaService): cfg = "!\n! BGP configuration\n!\n" cfg += "! You should configure the AS number below,\n" cfg += "! along with this router's peers.\n!\n" - cfg += f"router bgp {node.id}\n" + cfg += "router bgp %s\n" % node.id rtrid = cls.router_id(node) - cfg += f" bgp router-id {rtrid}\n" + cfg += " bgp router-id %s\n" % rtrid cfg += " redistribute connected\n" cfg += "! neighbor 1.2.3.4 remote-as 555\n!\n" return cfg @@ -486,8 +481,8 @@ class Rip(QuaggaService): """ name: str = "RIP" - shutdown: tuple[str, ...] = ("killall ripd",) - validate: tuple[str, ...] = ("pidof ripd",) + shutdown: Tuple[str, ...] = ("killall ripd",) + validate: Tuple[str, ...] = ("pidof ripd",) ipv4_routing: bool = True @classmethod @@ -509,8 +504,8 @@ class Ripng(QuaggaService): """ name: str = "RIPNG" - shutdown: tuple[str, ...] 
= ("killall ripngd",) - validate: tuple[str, ...] = ("pidof ripngd",) + shutdown: Tuple[str, ...] = ("killall ripngd",) + validate: Tuple[str, ...] = ("pidof ripngd",) ipv6_routing: bool = True @classmethod @@ -533,21 +528,21 @@ class Babel(QuaggaService): """ name: str = "Babel" - shutdown: tuple[str, ...] = ("killall babeld",) - validate: tuple[str, ...] = ("pidof babeld",) + shutdown: Tuple[str, ...] = ("killall babeld",) + validate: Tuple[str, ...] = ("pidof babeld",) ipv6_routing: bool = True @classmethod def generate_quagga_config(cls, node: CoreNode) -> str: cfg = "router babel\n" for iface in node.get_ifaces(control=False): - cfg += f" network {iface.name}\n" + cfg += " network %s\n" % iface.name cfg += " redistribute static\n redistribute connected\n" return cfg @classmethod def generate_quagga_iface_config(cls, node: CoreNode, iface: CoreInterface) -> str: - if is_wireless(iface.net): + if iface.net and iface.net.linktype == LinkTypes.WIRELESS: return " babel wireless\n no babel split-horizon\n" else: return " babel wired\n babel split-horizon\n" @@ -559,8 +554,8 @@ class Xpimd(QuaggaService): """ name: str = "Xpimd" - shutdown: tuple[str, ...] = ("killall xpimd",) - validate: tuple[str, ...] = ("pidof xpimd",) + shutdown: Tuple[str, ...] = ("killall xpimd",) + validate: Tuple[str, ...] = ("pidof xpimd",) ipv4_routing: bool = True @classmethod @@ -574,8 +569,8 @@ class Xpimd(QuaggaService): cfg += "router igmp\n!\n" cfg += "router pim\n" cfg += " !ip pim rp-address 10.0.0.1\n" - cfg += f" ip pim bsr-candidate {ifname}\n" - cfg += f" ip pim rp-candidate {ifname}\n" + cfg += " ip pim bsr-candidate %s\n" % ifname + cfg += " ip pim rp-candidate %s\n" % ifname cfg += " !ip pim spt-threshold interval 10 bytes 80000\n" return cfg diff --git a/daemon/core/services/sdn.py b/daemon/core/services/sdn.py index a31cf87d..e72b5138 100644 --- a/daemon/core/services/sdn.py +++ b/daemon/core/services/sdn.py @@ -3,6 +3,7 @@ sdn.py defines services to start Open vSwitch and the Ryu SDN Controller. """ import re +from typing import Tuple from core.nodes.base import CoreNode from core.services.coreservices import CoreService @@ -23,15 +24,15 @@ class SdnService(CoreService): class OvsService(SdnService): name: str = "OvsService" group: str = "SDN" - executables: tuple[str, ...] = ("ovs-ofctl", "ovs-vsctl") - dirs: tuple[str, ...] = ( + executables: Tuple[str, ...] = ("ovs-ofctl", "ovs-vsctl") + dirs: Tuple[str, ...] = ( "/etc/openvswitch", "/var/run/openvswitch", "/var/log/openvswitch", ) - configs: tuple[str, ...] = ("OvsService.sh",) - startup: tuple[str, ...] = ("bash OvsService.sh",) - shutdown: tuple[str, ...] = ("killall ovs-vswitchd", "killall ovsdb-server") + configs: Tuple[str, ...] = ("OvsService.sh",) + startup: Tuple[str, ...] = ("bash OvsService.sh",) + shutdown: Tuple[str, ...] 
= ("killall ovs-vswitchd", "killall ovsdb-server") @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -58,41 +59,39 @@ class OvsService(SdnService): # create virtual interfaces cfg += "## Create a veth pair to send the data to\n" - cfg += f"ip link add rtr{ifnum} type veth peer name sw{ifnum}\n" + cfg += "ip link add rtr%s type veth peer name sw%s\n" % (ifnum, ifnum) # remove ip address of eths because quagga/zebra will assign same IPs to rtr interfaces # or assign them manually to rtr interfaces if zebra is not running for ip4 in iface.ip4s: - cfg += f"ip addr del {ip4.ip} dev {iface.name}\n" + cfg += "ip addr del %s dev %s\n" % (ip4.ip, iface.name) if has_zebra == 0: - cfg += f"ip addr add {ip4.ip} dev rtr{ifnum}\n" + cfg += "ip addr add %s dev rtr%s\n" % (ip4.ip, ifnum) for ip6 in iface.ip6s: - cfg += f"ip -6 addr del {ip6.ip} dev {iface.name}\n" + cfg += "ip -6 addr del %s dev %s\n" % (ip6.ip, iface.name) if has_zebra == 0: - cfg += f"ip -6 addr add {ip6.ip} dev rtr{ifnum}\n" + cfg += "ip -6 addr add %s dev rtr%s\n" % (ip6.ip, ifnum) # add interfaces to bridge - # Make port numbers explicit so they're easier to follow in - # reading the script + # Make port numbers explicit so they're easier to follow in reading the script cfg += "## Add the CORE interface to the switch\n" cfg += ( - f"ovs-vsctl add-port ovsbr0 eth{ifnum} -- " - f"set Interface eth{ifnum} ofport_request={portnum:d}\n" + "ovs-vsctl add-port ovsbr0 eth%s -- set Interface eth%s ofport_request=%d\n" + % (ifnum, ifnum, portnum) ) cfg += "## And then add its sibling veth interface\n" cfg += ( - f"ovs-vsctl add-port ovsbr0 sw{ifnum} -- " - f"set Interface sw{ifnum} ofport_request={portnum + 1:d}\n" + "ovs-vsctl add-port ovsbr0 sw%s -- set Interface sw%s ofport_request=%d\n" + % (ifnum, ifnum, portnum + 1) ) cfg += "## start them up so we can send/receive data\n" - cfg += f"ovs-ofctl mod-port ovsbr0 eth{ifnum} up\n" - cfg += f"ovs-ofctl mod-port ovsbr0 sw{ifnum} up\n" + cfg += "ovs-ofctl mod-port ovsbr0 eth%s up\n" % ifnum + cfg += "ovs-ofctl mod-port ovsbr0 sw%s up\n" % ifnum cfg += "## Bring up the lower part of the veth pair\n" - cfg += f"ip link set dev rtr{ifnum} up\n" + cfg += "ip link set dev rtr%s up\n" % ifnum portnum += 2 - # Add rule for default controller if there is one local - # (even if the controller is not local, it finds it) + # Add rule for default controller if there is one local (even if the controller is not local, it finds it) cfg += "\n## We assume there will be an SDN controller on the other end of this, \n" cfg += "## but it will still function if there's not\n" cfg += "ovs-vsctl set-controller ovsbr0 tcp:127.0.0.1:6633\n" @@ -103,8 +102,14 @@ class OvsService(SdnService): portnum = 1 for iface in node.get_ifaces(control=False): cfg += "## Take the data from the CORE interface and put it on the veth and vice versa\n" - cfg += f"ovs-ofctl add-flow ovsbr0 priority=1000,in_port={portnum:d},action=output:{portnum + 1:d}\n" - cfg += f"ovs-ofctl add-flow ovsbr0 priority=1000,in_port={portnum + 1:d},action=output:{portnum:d}\n" + cfg += ( + "ovs-ofctl add-flow ovsbr0 priority=1000,in_port=%d,action=output:%d\n" + % (portnum, portnum + 1) + ) + cfg += ( + "ovs-ofctl add-flow ovsbr0 priority=1000,in_port=%d,action=output:%d\n" + % (portnum + 1, portnum) + ) portnum += 2 return cfg @@ -112,10 +117,10 @@ class OvsService(SdnService): class RyuService(SdnService): name: str = "ryuService" group: str = "SDN" - executables: tuple[str, ...] = ("ryu-manager",) - configs: tuple[str, ...] 
= ("ryuService.sh",) - startup: tuple[str, ...] = ("bash ryuService.sh",) - shutdown: tuple[str, ...] = ("killall ryu-manager",) + executables: Tuple[str, ...] = ("ryu-manager",) + configs: Tuple[str, ...] = ("ryuService.sh",) + startup: Tuple[str, ...] = ("bash ryuService.sh",) + shutdown: Tuple[str, ...] = ("killall ryu-manager",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: diff --git a/daemon/core/services/security.py b/daemon/core/services/security.py index afd71a14..788988c9 100644 --- a/daemon/core/services/security.py +++ b/daemon/core/services/security.py @@ -4,22 +4,21 @@ firewall) """ import logging +from typing import Tuple from core import constants from core.nodes.base import CoreNode from core.nodes.interface import CoreInterface from core.services.coreservices import CoreService -logger = logging.getLogger(__name__) - class VPNClient(CoreService): name: str = "VPNClient" group: str = "Security" - configs: tuple[str, ...] = ("vpnclient.sh",) - startup: tuple[str, ...] = ("bash vpnclient.sh",) - shutdown: tuple[str, ...] = ("killall openvpn",) - validate: tuple[str, ...] = ("pidof openvpn",) + configs: Tuple[str, ...] = ("vpnclient.sh",) + startup: Tuple[str, ...] = ("bash vpnclient.sh",) + shutdown: Tuple[str, ...] = ("killall openvpn",) + validate: Tuple[str, ...] = ("pidof openvpn",) custom_needed: bool = True @classmethod @@ -31,10 +30,10 @@ class VPNClient(CoreService): cfg += "# custom VPN Client configuration for service (security.py)\n" fname = f"{constants.CORE_DATA_DIR}/examples/services/sampleVPNClient" try: - with open(fname) as f: + with open(fname, "r") as f: cfg += f.read() - except OSError: - logger.exception( + except IOError: + logging.exception( "error opening VPN client configuration template (%s)", fname ) return cfg @@ -43,10 +42,10 @@ class VPNClient(CoreService): class VPNServer(CoreService): name: str = "VPNServer" group: str = "Security" - configs: tuple[str, ...] = ("vpnserver.sh",) - startup: tuple[str, ...] = ("bash vpnserver.sh",) - shutdown: tuple[str, ...] = ("killall openvpn",) - validate: tuple[str, ...] = ("pidof openvpn",) + configs: Tuple[str, ...] = ("vpnserver.sh",) + startup: Tuple[str, ...] = ("bash vpnserver.sh",) + shutdown: Tuple[str, ...] = ("killall openvpn",) + validate: Tuple[str, ...] = ("pidof openvpn",) custom_needed: bool = True @classmethod @@ -59,10 +58,10 @@ class VPNServer(CoreService): cfg += "# custom VPN Server Configuration for service (security.py)\n" fname = f"{constants.CORE_DATA_DIR}/examples/services/sampleVPNServer" try: - with open(fname) as f: + with open(fname, "r") as f: cfg += f.read() - except OSError: - logger.exception( + except IOError: + logging.exception( "Error opening VPN server configuration template (%s)", fname ) return cfg @@ -71,9 +70,9 @@ class VPNServer(CoreService): class IPsec(CoreService): name: str = "IPsec" group: str = "Security" - configs: tuple[str, ...] = ("ipsec.sh",) - startup: tuple[str, ...] = ("bash ipsec.sh",) - shutdown: tuple[str, ...] = ("killall racoon",) + configs: Tuple[str, ...] = ("ipsec.sh",) + startup: Tuple[str, ...] = ("bash ipsec.sh",) + shutdown: Tuple[str, ...] 
= ("killall racoon",) custom_needed: bool = True @classmethod @@ -87,18 +86,18 @@ class IPsec(CoreService): cfg += "(security.py)\n" fname = f"{constants.CORE_DATA_DIR}/examples/services/sampleIPsec" try: - with open(fname) as f: + with open(fname, "r") as f: cfg += f.read() - except OSError: - logger.exception("Error opening IPsec configuration template (%s)", fname) + except IOError: + logging.exception("Error opening IPsec configuration template (%s)", fname) return cfg class Firewall(CoreService): name: str = "Firewall" group: str = "Security" - configs: tuple[str, ...] = ("firewall.sh",) - startup: tuple[str, ...] = ("bash firewall.sh",) + configs: Tuple[str, ...] = ("firewall.sh",) + startup: Tuple[str, ...] = ("bash firewall.sh",) custom_needed: bool = True @classmethod @@ -110,10 +109,10 @@ class Firewall(CoreService): cfg += "# custom node firewall rules for service (security.py)\n" fname = f"{constants.CORE_DATA_DIR}/examples/services/sampleFirewall" try: - with open(fname) as f: + with open(fname, "r") as f: cfg += f.read() - except OSError: - logger.exception( + except IOError: + logging.exception( "Error opening Firewall configuration template (%s)", fname ) return cfg @@ -126,9 +125,9 @@ class Nat(CoreService): name: str = "NAT" group: str = "Security" - executables: tuple[str, ...] = ("iptables",) - configs: tuple[str, ...] = ("nat.sh",) - startup: tuple[str, ...] = ("bash nat.sh",) + executables: Tuple[str, ...] = ("iptables",) + configs: Tuple[str, ...] = ("nat.sh",) + startup: Tuple[str, ...] = ("bash nat.sh",) custom_needed: bool = False @classmethod diff --git a/daemon/core/services/ucarp.py b/daemon/core/services/ucarp.py index c6f2256e..522eeaf6 100644 --- a/daemon/core/services/ucarp.py +++ b/daemon/core/services/ucarp.py @@ -1,6 +1,7 @@ """ ucarp.py: defines high-availability IP address controlled by ucarp """ +from typing import Tuple from core.nodes.base import CoreNode from core.services.coreservices import CoreService @@ -11,16 +12,16 @@ UCARP_ETC = "/usr/local/etc/ucarp" class Ucarp(CoreService): name: str = "ucarp" group: str = "Utility" - dirs: tuple[str, ...] = (UCARP_ETC,) - configs: tuple[str, ...] = ( + dirs: Tuple[str, ...] = (UCARP_ETC,) + configs: Tuple[str, ...] = ( UCARP_ETC + "/default.sh", UCARP_ETC + "/default-up.sh", UCARP_ETC + "/default-down.sh", "ucarpboot.sh", ) - startup: tuple[str, ...] = ("bash ucarpboot.sh",) - shutdown: tuple[str, ...] = ("killall ucarp",) - validate: tuple[str, ...] = ("pidof ucarp",) + startup: Tuple[str, ...] = ("bash ucarpboot.sh",) + shutdown: Tuple[str, ...] = ("killall ucarp",) + validate: Tuple[str, ...] = ("pidof ucarp",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -43,14 +44,16 @@ class Ucarp(CoreService): """ Returns configuration file text. 
""" - ucarp_bin = node.session.options.get("ucarp_bin", "/usr/sbin/ucarp") - return f"""\ + ucarp_bin = node.session.options.get_config( + "ucarp_bin", default="/usr/sbin/ucarp" + ) + return """\ #!/bin/sh # Location of UCARP executable -UCARP_EXEC={ucarp_bin} +UCARP_EXEC=%s # Location of the UCARP config directory -UCARP_CFGDIR={UCARP_ETC} +UCARP_CFGDIR=%s # Logging Facility FACILITY=daemon @@ -91,34 +94,40 @@ OPTIONS="-z -n -M" # Send extra parameter to down and up scripts #XPARAM="-x " -XPARAM="-x ${{VIRTUAL_NET}}" +XPARAM="-x ${VIRTUAL_NET}" # The start and stop scripts -START_SCRIPT=${{UCARP_CFGDIR}}/default-up.sh -STOP_SCRIPT=${{UCARP_CFGDIR}}/default-down.sh +START_SCRIPT=${UCARP_CFGDIR}/default-up.sh +STOP_SCRIPT=${UCARP_CFGDIR}/default-down.sh # These line should not need to be touched UCARP_OPTS="$OPTIONS -b $UCARP_BASE -k $SKEW -i $INTERFACE -v $INSTANCE_ID -p $PASSWORD -u $START_SCRIPT -d $STOP_SCRIPT -a $VIRTUAL_ADDRESS -s $SOURCE_ADDRESS -f $FACILITY $XPARAM" -${{UCARP_EXEC}} -B ${{UCARP_OPTS}} -""" +${UCARP_EXEC} -B ${UCARP_OPTS} +""" % ( + ucarp_bin, + UCARP_ETC, + ) @classmethod def generate_ucarp_boot(cls, node: CoreNode) -> str: """ Generate a shell script used to boot the Ucarp daemons. """ - return f"""\ + return ( + """\ #!/bin/sh # Location of the UCARP config directory -UCARP_CFGDIR={UCARP_ETC} +UCARP_CFGDIR=%s -chmod a+x ${{UCARP_CFGDIR}}/*.sh +chmod a+x ${UCARP_CFGDIR}/*.sh # Start the default ucarp daemon configuration -${{UCARP_CFGDIR}}/default.sh +${UCARP_CFGDIR}/default.sh """ + % UCARP_ETC + ) @classmethod def generate_vip_up(cls, node: CoreNode) -> str: diff --git a/daemon/core/services/utility.py b/daemon/core/services/utility.py index e83cb9d5..d7bd2edb 100644 --- a/daemon/core/services/utility.py +++ b/daemon/core/services/utility.py @@ -1,7 +1,7 @@ """ utility.py: defines miscellaneous utility services. """ -from typing import Optional +from typing import Optional, Tuple import netaddr @@ -27,8 +27,8 @@ class UtilService(CoreService): class IPForwardService(UtilService): name: str = "IPForward" - configs: tuple[str, ...] = ("ipforward.sh",) - startup: tuple[str, ...] = ("bash ipforward.sh",) + configs: Tuple[str, ...] = ("ipforward.sh",) + startup: Tuple[str, ...] 
= ("bash ipforward.sh",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -36,30 +36,32 @@ class IPForwardService(UtilService): @classmethod def generateconfiglinux(cls, node: CoreNode, filename: str) -> str: - cfg = f"""\ + cfg = """\ #!/bin/sh # auto-generated by IPForward service (utility.py) -{SYSCTL} -w net.ipv4.conf.all.forwarding=1 -{SYSCTL} -w net.ipv4.conf.default.forwarding=1 -{SYSCTL} -w net.ipv6.conf.all.forwarding=1 -{SYSCTL} -w net.ipv6.conf.default.forwarding=1 -{SYSCTL} -w net.ipv4.conf.all.send_redirects=0 -{SYSCTL} -w net.ipv4.conf.default.send_redirects=0 -{SYSCTL} -w net.ipv4.conf.all.rp_filter=0 -{SYSCTL} -w net.ipv4.conf.default.rp_filter=0 -""" +%(sysctl)s -w net.ipv4.conf.all.forwarding=1 +%(sysctl)s -w net.ipv4.conf.default.forwarding=1 +%(sysctl)s -w net.ipv6.conf.all.forwarding=1 +%(sysctl)s -w net.ipv6.conf.default.forwarding=1 +%(sysctl)s -w net.ipv4.conf.all.send_redirects=0 +%(sysctl)s -w net.ipv4.conf.default.send_redirects=0 +%(sysctl)s -w net.ipv4.conf.all.rp_filter=0 +%(sysctl)s -w net.ipv4.conf.default.rp_filter=0 +""" % { + "sysctl": SYSCTL + } for iface in node.get_ifaces(): name = utils.sysctl_devname(iface.name) - cfg += f"{SYSCTL} -w net.ipv4.conf.{name}.forwarding=1\n" - cfg += f"{SYSCTL} -w net.ipv4.conf.{name}.send_redirects=0\n" - cfg += f"{SYSCTL} -w net.ipv4.conf.{name}.rp_filter=0\n" + cfg += "%s -w net.ipv4.conf.%s.forwarding=1\n" % (SYSCTL, name) + cfg += "%s -w net.ipv4.conf.%s.send_redirects=0\n" % (SYSCTL, name) + cfg += "%s -w net.ipv4.conf.%s.rp_filter=0\n" % (SYSCTL, name) return cfg class DefaultRouteService(UtilService): name: str = "DefaultRoute" - configs: tuple[str, ...] = ("defaultroute.sh",) - startup: tuple[str, ...] = ("bash defaultroute.sh",) + configs: Tuple[str, ...] = ("defaultroute.sh",) + startup: Tuple[str, ...] = ("bash defaultroute.sh",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -81,8 +83,8 @@ class DefaultRouteService(UtilService): class DefaultMulticastRouteService(UtilService): name: str = "DefaultMulticastRoute" - configs: tuple[str, ...] = ("defaultmroute.sh",) - startup: tuple[str, ...] = ("bash defaultmroute.sh",) + configs: Tuple[str, ...] = ("defaultmroute.sh",) + startup: Tuple[str, ...] = ("bash defaultmroute.sh",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -92,7 +94,7 @@ class DefaultMulticastRouteService(UtilService): cfg += "as needed\n" for iface in node.get_ifaces(control=False): rtcmd = "ip route add 224.0.0.0/4 dev" - cfg += f"{rtcmd} {iface.name}\n" + cfg += "%s %s\n" % (rtcmd, iface.name) cfg += "\n" break return cfg @@ -100,8 +102,8 @@ class DefaultMulticastRouteService(UtilService): class StaticRouteService(UtilService): name: str = "StaticRoute" - configs: tuple[str, ...] = ("staticroute.sh",) - startup: tuple[str, ...] = ("bash staticroute.sh",) + configs: Tuple[str, ...] = ("staticroute.sh",) + startup: Tuple[str, ...] = ("bash staticroute.sh",) custom_needed: bool = True @classmethod @@ -125,16 +127,16 @@ class StaticRouteService(UtilService): if ip[-2] == ip[1]: return "" else: - rtcmd = f"#/sbin/ip route add {dst} via" - return f"{rtcmd} {ip[1]}" + rtcmd = "#/sbin/ip route add %s via" % dst + return "%s %s" % (rtcmd, ip[1]) class SshService(UtilService): name: str = "SSH" - configs: tuple[str, ...] = ("startsshd.sh", "/etc/ssh/sshd_config") - dirs: tuple[str, ...] = ("/etc/ssh", "/var/run/sshd") - startup: tuple[str, ...] = ("bash startsshd.sh",) - shutdown: tuple[str, ...] 
= ("killall sshd",) + configs: Tuple[str, ...] = ("startsshd.sh", "/etc/ssh/sshd_config") + dirs: Tuple[str, ...] = ("/etc/ssh", "/var/run/sshd") + startup: Tuple[str, ...] = ("bash startsshd.sh",) + shutdown: Tuple[str, ...] = ("killall sshd",) validation_mode: ServiceMode = ServiceMode.BLOCKING @classmethod @@ -147,22 +149,26 @@ class SshService(UtilService): sshstatedir = cls.dirs[1] sshlibdir = "/usr/lib/openssh" if filename == "startsshd.sh": - return f"""\ + return """\ #!/bin/sh # auto-generated by SSH service (utility.py) -ssh-keygen -q -t rsa -N "" -f {sshcfgdir}/ssh_host_rsa_key -chmod 655 {sshstatedir} +ssh-keygen -q -t rsa -N "" -f %s/ssh_host_rsa_key +chmod 655 %s # wait until RSA host key has been generated to launch sshd -/usr/sbin/sshd -f {sshcfgdir}/sshd_config -""" +/usr/sbin/sshd -f %s/sshd_config +""" % ( + sshcfgdir, + sshstatedir, + sshcfgdir, + ) else: - return f"""\ + return """\ # auto-generated by SSH service (utility.py) Port 22 Protocol 2 -HostKey {sshcfgdir}/ssh_host_rsa_key +HostKey %s/ssh_host_rsa_key UsePrivilegeSeparation yes -PidFile {sshstatedir}/sshd.pid +PidFile %s/sshd.pid KeyRegenerationInterval 3600 ServerKeyBits 768 @@ -191,19 +197,23 @@ PrintLastLog yes TCPKeepAlive yes AcceptEnv LANG LC_* -Subsystem sftp {sshlibdir}/sftp-server +Subsystem sftp %s/sftp-server UsePAM yes UseDNS no -""" +""" % ( + sshcfgdir, + sshstatedir, + sshlibdir, + ) class DhcpService(UtilService): name: str = "DHCP" - configs: tuple[str, ...] = ("/etc/dhcp/dhcpd.conf",) - dirs: tuple[str, ...] = ("/etc/dhcp", "/var/lib/dhcp") - startup: tuple[str, ...] = ("touch /var/lib/dhcp/dhcpd.leases", "dhcpd") - shutdown: tuple[str, ...] = ("killall dhcpd",) - validate: tuple[str, ...] = ("pidof dhcpd",) + configs: Tuple[str, ...] = ("/etc/dhcp/dhcpd.conf",) + dirs: Tuple[str, ...] = ("/etc/dhcp", "/var/lib/dhcp") + startup: Tuple[str, ...] = ("touch /var/lib/dhcp/dhcpd.leases", "dhcpd") + shutdown: Tuple[str, ...] = ("killall dhcpd",) + validate: Tuple[str, ...] = ("pidof dhcpd",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -226,7 +236,7 @@ max-lease-time 7200; ddns-update-style none; """ for iface in node.get_ifaces(control=False): - cfg += "\n".join(map(cls.subnetentry, iface.ip4s)) + cfg += "\n".join(map(cls.subnetentry, iface.ips())) cfg += "\n" return cfg @@ -236,21 +246,29 @@ ddns-update-style none; Generate a subnet declaration block given an IPv4 prefix string for inclusion in the dhcpd3 config file. """ - if ip.size == 1: + address = str(ip.ip) + if netaddr.valid_ipv6(address): return "" - # divide the address space in half - index = (ip.size - 2) / 2 - rangelow = ip[index] - rangehigh = ip[-2] - return f""" -subnet {ip.cidr.ip} netmask {ip.netmask} {{ - pool {{ - range {rangelow} {rangehigh}; + else: + # divide the address space in half + index = (ip.size - 2) / 2 + rangelow = ip[index] + rangehigh = ip[-2] + return """ +subnet %s netmask %s { + pool { + range %s %s; default-lease-time 600; - option routers {ip.ip}; - }} -}} -""" + option routers %s; + } +} +""" % ( + ip.ip, + ip.netmask, + rangelow, + rangehigh, + address, + ) class DhcpClientService(UtilService): @@ -259,10 +277,10 @@ class DhcpClientService(UtilService): """ name: str = "DHCPClient" - configs: tuple[str, ...] = ("startdhcpclient.sh",) - startup: tuple[str, ...] = ("bash startdhcpclient.sh",) - shutdown: tuple[str, ...] = ("killall dhclient",) - validate: tuple[str, ...] = ("pidof dhclient",) + configs: Tuple[str, ...] = ("startdhcpclient.sh",) + startup: Tuple[str, ...] 
= ("bash startdhcpclient.sh",) + shutdown: Tuple[str, ...] = ("killall dhclient",) + validate: Tuple[str, ...] = ("pidof dhclient",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -275,10 +293,10 @@ class DhcpClientService(UtilService): cfg += "side DNS\n# resolution based on the DHCP server response.\n" cfg += "#mkdir -p /var/run/resolvconf/interface\n" for iface in node.get_ifaces(control=False): - cfg += f"#ln -s /var/run/resolvconf/interface/{iface.name}.dhclient" + cfg += "#ln -s /var/run/resolvconf/interface/%s.dhclient" % iface.name cfg += " /var/run/resolvconf/resolv.conf\n" - cfg += f"/sbin/dhclient -nw -pf /var/run/dhclient-{iface.name}.pid" - cfg += f" -lf /var/run/dhclient-{iface.name}.lease {iface.name}\n" + cfg += "/sbin/dhclient -nw -pf /var/run/dhclient-%s.pid" % iface.name + cfg += " -lf /var/run/dhclient-%s.lease %s\n" % (iface.name, iface.name) return cfg @@ -288,11 +306,11 @@ class FtpService(UtilService): """ name: str = "FTP" - configs: tuple[str, ...] = ("vsftpd.conf",) - dirs: tuple[str, ...] = ("/var/run/vsftpd/empty", "/var/ftp") - startup: tuple[str, ...] = ("vsftpd ./vsftpd.conf",) - shutdown: tuple[str, ...] = ("killall vsftpd",) - validate: tuple[str, ...] = ("pidof vsftpd",) + configs: Tuple[str, ...] = ("vsftpd.conf",) + dirs: Tuple[str, ...] = ("/var/run/vsftpd/empty", "/var/ftp") + startup: Tuple[str, ...] = ("vsftpd ./vsftpd.conf",) + shutdown: Tuple[str, ...] = ("killall vsftpd",) + validate: Tuple[str, ...] = ("pidof vsftpd",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -321,12 +339,12 @@ class HttpService(UtilService): """ name: str = "HTTP" - configs: tuple[str, ...] = ( + configs: Tuple[str, ...] = ( "/etc/apache2/apache2.conf", "/etc/apache2/envvars", "/var/www/index.html", ) - dirs: tuple[str, ...] = ( + dirs: Tuple[str, ...] = ( "/etc/apache2", "/var/run/apache2", "/var/log/apache2", @@ -334,9 +352,9 @@ class HttpService(UtilService): "/var/lock/apache2", "/var/www", ) - startup: tuple[str, ...] = ("chown www-data /var/lock/apache2", "apache2ctl start") - shutdown: tuple[str, ...] = ("apache2ctl stop",) - validate: tuple[str, ...] = ("pidof apache2",) + startup: Tuple[str, ...] = ("chown www-data /var/lock/apache2", "apache2ctl start") + shutdown: Tuple[str, ...] = ("apache2ctl stop",) + validate: Tuple[str, ...] = ("pidof apache2",) APACHEVER22: int = 22 APACHEVER24: int = 24 @@ -522,15 +540,18 @@ export LANG @classmethod def generatehtml(cls, node: CoreNode, filename: str) -> str: - body = f"""\ + body = ( + """\ -

<h1>{node.name} web server</h1>
+<h1>%s web server</h1>
 <p>This is the default web page for this server.</p>
 <p>The web server software is running but no content has been added, yet.</p>
 """
+            % node.name
+        )
         for iface in node.get_ifaces(control=False):
-            body += f"<li>{iface.name} - {[str(x) for x in iface.ips()]}</li>\n"
-        return f"<html><body>{body}</body></html>"
+            body += "<li>%s - %s</li>\n" % (iface.name, [str(x) for x in iface.ips()])
+        return "<html><body>%s</body></html>" % body
class PcapService(UtilService): @@ -539,10 +560,10 @@ class PcapService(UtilService): """ name: str = "pcap" - configs: tuple[str, ...] = ("pcap.sh",) - startup: tuple[str, ...] = ("bash pcap.sh start",) - shutdown: tuple[str, ...] = ("bash pcap.sh stop",) - validate: tuple[str, ...] = ("pidof tcpdump",) + configs: Tuple[str, ...] = ("pcap.sh",) + startup: Tuple[str, ...] = ("bash pcap.sh start",) + shutdown: Tuple[str, ...] = ("bash pcap.sh stop",) + validate: Tuple[str, ...] = ("pidof tcpdump",) meta: str = "logs network traffic to pcap packet capture files" @classmethod @@ -560,12 +581,14 @@ if [ "x$1" = "xstart" ]; then """ for iface in node.get_ifaces(): - if iface.control: + if hasattr(iface, "control") and iface.control is True: cfg += "# " redir = "< /dev/null" - cfg += ( - f"tcpdump ${{DUMPOPTS}} -w {node.name}.{iface.name}.pcap " - f"-i {iface.name} {redir} &\n" + cfg += "tcpdump ${DUMPOPTS} -w %s.%s.pcap -i %s %s &\n" % ( + node.name, + iface.name, + iface.name, + redir, ) cfg += """ @@ -579,13 +602,13 @@ fi; class RadvdService(UtilService): name: str = "radvd" - configs: tuple[str, ...] = ("/etc/radvd/radvd.conf",) - dirs: tuple[str, ...] = ("/etc/radvd", "/var/run/radvd") - startup: tuple[str, ...] = ( + configs: Tuple[str, ...] = ("/etc/radvd/radvd.conf",) + dirs: Tuple[str, ...] = ("/etc/radvd", "/var/run/radvd") + startup: Tuple[str, ...] = ( "radvd -C /etc/radvd/radvd.conf -m logfile -l /var/log/radvd.log", ) - shutdown: tuple[str, ...] = ("pkill radvd",) - validate: tuple[str, ...] = ("pidof radvd",) + shutdown: Tuple[str, ...] = ("pkill radvd",) + validate: Tuple[str, ...] = ("pidof radvd",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -598,26 +621,32 @@ class RadvdService(UtilService): prefixes = list(map(cls.subnetentry, iface.ips())) if len(prefixes) < 1: continue - cfg += f"""\ -interface {iface.name} -{{ + cfg += ( + """\ +interface %s +{ AdvSendAdvert on; MinRtrAdvInterval 3; MaxRtrAdvInterval 10; AdvDefaultPreference low; AdvHomeAgentFlag off; """ + % iface.name + ) for prefix in prefixes: if prefix == "": continue - cfg += f"""\ - prefix {prefix} - {{ + cfg += ( + """\ + prefix %s + { AdvOnLink on; AdvAutonomous on; AdvRouterAddr on; - }}; + }; """ + % prefix + ) cfg += "};\n" return cfg @@ -640,10 +669,10 @@ class AtdService(UtilService): """ name: str = "atd" - configs: tuple[str, ...] = ("startatd.sh",) - dirs: tuple[str, ...] = ("/var/spool/cron/atjobs", "/var/spool/cron/atspool") - startup: tuple[str, ...] = ("bash startatd.sh",) - shutdown: tuple[str, ...] = ("pkill atd",) + configs: Tuple[str, ...] = ("startatd.sh",) + dirs: Tuple[str, ...] = ("/var/spool/cron/atjobs", "/var/spool/cron/atspool") + startup: Tuple[str, ...] = ("bash startatd.sh",) + shutdown: Tuple[str, ...] = ("pkill atd",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: diff --git a/daemon/core/services/xorp.py b/daemon/core/services/xorp.py index ac29b299..485fe159 100644 --- a/daemon/core/services/xorp.py +++ b/daemon/core/services/xorp.py @@ -2,7 +2,7 @@ xorp.py: defines routing services provided by the XORP routing suite. """ -from typing import Optional +from typing import Optional, Tuple import netaddr @@ -19,14 +19,15 @@ class XorpRtrmgr(CoreService): name: str = "xorp_rtrmgr" group: str = "XORP" - executables: tuple[str, ...] = ("xorp_rtrmgr",) - dirs: tuple[str, ...] = ("/etc/xorp",) - configs: tuple[str, ...]
= ("/etc/xorp/config.boot",) - startup: tuple[ - str, ... - ] = f"xorp_rtrmgr -d -b {configs[0]} -l /var/log/{name}.log -P /var/run/{name}.pid" - shutdown: tuple[str, ...] = ("killall xorp_rtrmgr",) - validate: tuple[str, ...] = ("pidof xorp_rtrmgr",) + executables: Tuple[str, ...] = ("xorp_rtrmgr",) + dirs: Tuple[str, ...] = ("/etc/xorp",) + configs: Tuple[str, ...] = ("/etc/xorp/config.boot",) + startup: Tuple[str, ...] = ( + "xorp_rtrmgr -d -b %s -l /var/log/%s.log -P /var/run/%s.pid" + % (configs[0], name, name), + ) + shutdown: Tuple[str, ...] = ("killall xorp_rtrmgr",) + validate: Tuple[str, ...] = ("pidof xorp_rtrmgr",) @classmethod def generate_config(cls, node: CoreNode, filename: str) -> str: @@ -37,8 +38,8 @@ class XorpRtrmgr(CoreService): """ cfg = "interfaces {\n" for iface in node.get_ifaces(): - cfg += f" interface {iface.name} {{\n" - cfg += f"\tvif {iface.name} {{\n" + cfg += " interface %s {\n" % iface.name + cfg += "\tvif %s {\n" % iface.name cfg += "".join(map(cls.addrstr, iface.ips())) cfg += cls.lladdrstr(iface) cfg += "\t}\n" @@ -58,8 +59,8 @@ class XorpRtrmgr(CoreService): """ helper for mapping IP addresses to XORP config statements """ - cfg = f"\t address {ip.ip} {{\n" - cfg += f"\t\tprefix-length: {ip.prefixlen}\n" + cfg = "\t address %s {\n" % ip.ip + cfg += "\t\tprefix-length: %s\n" % ip.prefixlen cfg += "\t }\n" return cfg @@ -68,7 +69,7 @@ class XorpRtrmgr(CoreService): """ helper for adding link-local address entries (required by OSPFv3) """ - cfg = f"\t address {iface.mac.eui64()} {{\n" + cfg = "\t address %s {\n" % iface.mac.eui64() cfg += "\t\tprefix-length: 64\n" cfg += "\t }\n" return cfg @@ -82,8 +83,8 @@ class XorpService(CoreService): name: Optional[str] = None group: str = "XORP" - executables: tuple[str, ...] = ("xorp_rtrmgr",) - dependencies: tuple[str, ...] = ("xorp_rtrmgr",) + executables: Tuple[str, ...] = ("xorp_rtrmgr",) + dependencies: Tuple[str, ...] = ("xorp_rtrmgr",) meta: str = ( "The config file for this service can be found in the xorp_rtrmgr service." ) @@ -94,7 +95,7 @@ class XorpService(CoreService): Helper to add a forwarding engine entry to the config file. 
""" cfg = "fea {\n" - cfg += f" {forwarding} {{\n" + cfg += " %s {\n" % forwarding cfg += "\tdisable:false\n" cfg += " }\n" cfg += "}\n" @@ -110,10 +111,10 @@ class XorpService(CoreService): names.append(iface.name) names.append("register_vif") cfg = "plumbing {\n" - cfg += f" {forwarding} {{\n" + cfg += " %s {\n" % forwarding for name in names: - cfg += f"\tinterface {name} {{\n" - cfg += f"\t vif {name} {{\n" + cfg += "\tinterface %s {\n" % name + cfg += "\t vif %s {\n" % name cfg += "\t\tdisable: false\n" cfg += "\t }\n" cfg += "\t}\n" @@ -172,13 +173,13 @@ class XorpOspfv2(XorpService): rtrid = cls.router_id(node) cfg += "\nprotocols {\n" cfg += " ospf4 {\n" - cfg += f"\trouter-id: {rtrid}\n" + cfg += "\trouter-id: %s\n" % rtrid cfg += "\tarea 0.0.0.0 {\n" for iface in node.get_ifaces(control=False): - cfg += f"\t interface {iface.name} {{\n" - cfg += f"\t\tvif {iface.name} {{\n" + cfg += "\t interface %s {\n" % iface.name + cfg += "\t\tvif %s {\n" % iface.name for ip4 in iface.ip4s: - cfg += f"\t\t address {ip4.ip} {{\n" + cfg += "\t\t address %s {\n" % ip4.ip cfg += "\t\t }\n" cfg += "\t\t}\n" cfg += "\t }\n" @@ -203,11 +204,11 @@ class XorpOspfv3(XorpService): rtrid = cls.router_id(node) cfg += "\nprotocols {\n" cfg += " ospf6 0 { /* Instance ID 0 */\n" - cfg += f"\trouter-id: {rtrid}\n" + cfg += "\trouter-id: %s\n" % rtrid cfg += "\tarea 0.0.0.0 {\n" for iface in node.get_ifaces(control=False): - cfg += f"\t interface {iface.name} {{\n" - cfg += f"\t\tvif {iface.name} {{\n" + cfg += "\t interface %s {\n" % iface.name + cfg += "\t\tvif %s {\n" % iface.name cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t}\n" @@ -233,7 +234,7 @@ class XorpBgp(XorpService): rtrid = cls.router_id(node) cfg += "\nprotocols {\n" cfg += " bgp {\n" - cfg += f"\tbgp-id: {rtrid}\n" + cfg += "\tbgp-id: %s\n" % rtrid cfg += "\tlocal-as: 65001 /* change this */\n" cfg += '\texport: "export-connected"\n' cfg += "\tpeer 10.0.1.1 { /* change this */\n" @@ -261,10 +262,10 @@ class XorpRip(XorpService): cfg += " rip {\n" cfg += '\texport: "export-connected"\n' for iface in node.get_ifaces(control=False): - cfg += f"\tinterface {iface.name} {{\n" - cfg += f"\t vif {iface.name} {{\n" + cfg += "\tinterface %s {\n" % iface.name + cfg += "\t vif %s {\n" % iface.name for ip4 in iface.ip4s: - cfg += f"\t\taddress {ip4.ip} {{\n" + cfg += "\t\taddress %s {\n" % ip4.ip cfg += "\t\t disable: false\n" cfg += "\t\t}\n" cfg += "\t }\n" @@ -289,9 +290,9 @@ class XorpRipng(XorpService): cfg += " ripng {\n" cfg += '\texport: "export-connected"\n' for iface in node.get_ifaces(control=False): - cfg += f"\tinterface {iface.name} {{\n" - cfg += f"\t vif {iface.name} {{\n" - cfg += f"\t\taddress {iface.mac.eui64()} {{\n" + cfg += "\tinterface %s {\n" % iface.name + cfg += "\t vif %s {\n" % iface.name + cfg += "\t\taddress %s {\n" % iface.mac.eui64() cfg += "\t\t disable: false\n" cfg += "\t\t}\n" cfg += "\t }\n" @@ -316,8 +317,8 @@ class XorpPimSm4(XorpService): names = [] for iface in node.get_ifaces(control=False): names.append(iface.name) - cfg += f"\tinterface {iface.name} {{\n" - cfg += f"\t vif {iface.name} {{\n" + cfg += "\tinterface %s {\n" % iface.name + cfg += "\t vif %s {\n" % iface.name cfg += "\t\tdisable: false\n" cfg += "\t }\n" cfg += "\t}\n" @@ -328,20 +329,20 @@ class XorpPimSm4(XorpService): names.append("register_vif") for name in names: - cfg += f"\tinterface {name} {{\n" - cfg += f"\t vif {name} {{\n" + cfg += "\tinterface %s {\n" % name + cfg += "\t vif %s {\n" % name cfg += "\t\tdr-priority: 1\n" cfg += "\t }\n" cfg += 
"\t}\n" cfg += "\tbootstrap {\n" cfg += "\t cand-bsr {\n" cfg += "\t\tscope-zone 224.0.0.0/4 {\n" - cfg += f'\t\t cand-bsr-by-vif-name: "{names[0]}"\n' + cfg += '\t\t cand-bsr-by-vif-name: "%s"\n' % names[0] cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t cand-rp {\n" cfg += "\t\tgroup-prefix 224.0.0.0/4 {\n" - cfg += f'\t\t cand-rp-by-vif-name: "{names[0]}"\n' + cfg += '\t\t cand-rp-by-vif-name: "%s"\n' % names[0] cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t}\n" @@ -370,8 +371,8 @@ class XorpPimSm6(XorpService): names = [] for iface in node.get_ifaces(control=False): names.append(iface.name) - cfg += f"\tinterface {iface.name} {{\n" - cfg += f"\t vif {iface.name} {{\n" + cfg += "\tinterface %s {\n" % iface.name + cfg += "\t vif %s {\n" % iface.name cfg += "\t\tdisable: false\n" cfg += "\t }\n" cfg += "\t}\n" @@ -382,20 +383,20 @@ class XorpPimSm6(XorpService): names.append("register_vif") for name in names: - cfg += f"\tinterface {name} {{\n" - cfg += f"\t vif {name} {{\n" + cfg += "\tinterface %s {\n" % name + cfg += "\t vif %s {\n" % name cfg += "\t\tdr-priority: 1\n" cfg += "\t }\n" cfg += "\t}\n" cfg += "\tbootstrap {\n" cfg += "\t cand-bsr {\n" cfg += "\t\tscope-zone ff00::/8 {\n" - cfg += f'\t\t cand-bsr-by-vif-name: "{names[0]}"\n' + cfg += '\t\t cand-bsr-by-vif-name: "%s"\n' % names[0] cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t cand-rp {\n" cfg += "\t\tgroup-prefix ff00::/8 {\n" - cfg += f'\t\t cand-rp-by-vif-name: "{names[0]}"\n' + cfg += '\t\t cand-rp-by-vif-name: "%s"\n' % names[0] cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t}\n" @@ -422,12 +423,12 @@ class XorpOlsr(XorpService): rtrid = cls.router_id(node) cfg += "\nprotocols {\n" cfg += " olsr4 {\n" - cfg += f"\tmain-address: {rtrid}\n" + cfg += "\tmain-address: %s\n" % rtrid for iface in node.get_ifaces(control=False): - cfg += f"\tinterface {iface.name} {{\n" - cfg += f"\t vif {iface.name} {{\n" + cfg += "\tinterface %s {\n" % iface.name + cfg += "\t vif %s {\n" % iface.name for ip4 in iface.ip4s: - cfg += f"\t\taddress {ip4.ip} {{\n" + cfg += "\t\taddress %s {\n" % ip4.ip cfg += "\t\t}\n" cfg += "\t }\n" cfg += "\t}\n" diff --git a/daemon/core/utils.py b/daemon/core/utils.py index df00984c..4a9d6ca6 100644 --- a/daemon/core/utils.py +++ b/daemon/core/utils.py @@ -15,22 +15,28 @@ import random import shlex import shutil import sys -import threading -from collections import OrderedDict -from collections.abc import Iterable from pathlib import Path -from queue import Queue from subprocess import PIPE, STDOUT, Popen -from typing import TYPE_CHECKING, Any, Callable, Generic, Optional, TypeVar, Union +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Dict, + Generic, + Iterable, + List, + Optional, + Tuple, + Type, + TypeVar, + Union, +) import netaddr from core.errors import CoreCommandError, CoreError -logger = logging.getLogger(__name__) - if TYPE_CHECKING: - from core.emulator.coreemu import CoreEmu from core.emulator.session import Session from core.nodes.base import CoreNode T = TypeVar("T") @@ -39,29 +45,12 @@ DEVNULL = open(os.devnull, "wb") IFACE_CONFIG_FACTOR: int = 1000 -def execute_script(coreemu: "CoreEmu", file_path: Path, args: str) -> None: - """ - Provides utility function to execute a python script in context of the - provide coreemu instance. 
- - :param coreemu: coreemu to provide to script - :param file_path: python script to execute - :param args: args to provide script - :return: nothing - """ - sys.argv = shlex.split(args) - thread = threading.Thread( - target=execute_file, args=(file_path, {"coreemu": coreemu}), daemon=True - ) - thread.start() - thread.join() - - def execute_file( - path: Path, exec_globals: dict[str, str] = None, exec_locals: dict[str, str] = None + path: str, exec_globals: Dict[str, str] = None, exec_locals: Dict[str, str] = None ) -> None: """ - Provides a way to execute a file. + Provides an alternative way to run execfile to be compatible for + both python2/3. :param path: path of file to execute :param exec_globals: globals values to pass to execution @@ -70,10 +59,10 @@ def execute_file( """ if exec_globals is None: exec_globals = {} - exec_globals.update({"__file__": str(path), "__name__": "__main__"}) - with path.open("rb") as f: + exec_globals.update({"__file__": path, "__name__": "__main__"}) + with open(path, "rb") as f: data = compile(f.read(), path, "exec") - exec(data, exec_globals, exec_locals) + exec(data, exec_globals, exec_locals) def hashkey(value: Union[str, int]) -> int: @@ -87,7 +76,7 @@ def hashkey(value: Union[str, int]) -> int: """ if isinstance(value, int): value = str(value) - value = value.encode() + value = value.encode("utf-8") return int(hashlib.sha256(value).hexdigest(), 16) @@ -103,23 +92,28 @@ def _detach_init() -> None: os.setsid() -def _valid_module(path: Path) -> bool: +def _valid_module(path: str, file_name: str) -> bool: """ Check if file is a valid python module. :param path: path to file + :param file_name: file name to check :return: True if a valid python module file, False otherwise """ - if not path.is_file(): + file_path = os.path.join(path, file_name) + if not os.path.isfile(file_path): return False - if path.name.startswith("_"): + + if file_name.startswith("_"): return False - if not path.suffix == ".py": + + if not file_name.endswith(".py"): return False + return True -def _is_class(module: Any, member: type, clazz: type) -> bool: +def _is_class(module: Any, member: Type, clazz: Type) -> bool: """ Validates if a module member is a class and an instance of a CoreService. @@ -130,10 +124,13 @@ def _is_class(module: Any, member: type, clazz: type) -> bool: """ if not inspect.isclass(member): return False + if not issubclass(member, clazz): return False + if member.__module__ != module.__name__: return False + return True @@ -163,7 +160,7 @@ def which(command: str, required: bool) -> str: return found_path -def make_tuple_fromstr(s: str, value_type: Callable[[str], T]) -> tuple[T]: +def make_tuple_fromstr(s: str, value_type: Callable[[str], T]) -> Tuple[T]: """ Create a tuple from a string. @@ -181,7 +178,7 @@ def make_tuple_fromstr(s: str, value_type: Callable[[str], T]) -> tuple[T]: return tuple(value_type(i) for i in values) -def mute_detach(args: str, **kwargs: dict[str, Any]) -> int: +def mute_detach(args: str, **kwargs: Dict[str, Any]) -> int: """ Run a muted detached process by forking it. @@ -198,13 +195,14 @@ def mute_detach(args: str, **kwargs: dict[str, Any]) -> int: def cmd( args: str, - env: dict[str, str] = None, - cwd: Path = None, + env: Dict[str, str] = None, + cwd: str = None, wait: bool = True, shell: bool = False, ) -> str: """ - Execute a command on the host and returns the combined stderr stdout output. + Execute a command on the host and return a tuple containing the exit status and + result string. 
stderr output is folded into the stdout result string. :param args: command arguments :param env: environment to run command with @@ -215,8 +213,7 @@ def cmd( :raises CoreCommandError: when there is a non-zero exit status or the file to execute is not found """ - logger.debug("command cwd(%s) wait(%s): %s", cwd, wait, args) - input_args = args + logging.debug("command cwd(%s) wait(%s): %s", cwd, wait, args) if shell is False: args = shlex.split(args) try: @@ -224,36 +221,17 @@ def cmd( p = Popen(args, stdout=output, stderr=output, env=env, cwd=cwd, shell=shell) if wait: stdout, stderr = p.communicate() - stdout = stdout.decode().strip() - stderr = stderr.decode().strip() - status = p.returncode + stdout = stdout.decode("utf-8").strip() + stderr = stderr.decode("utf-8").strip() + status = p.wait() if status != 0: - raise CoreCommandError(status, input_args, stdout, stderr) + raise CoreCommandError(status, args, stdout, stderr) return stdout else: return "" except OSError as e: - logger.error("cmd error: %s", e.strerror) - raise CoreCommandError(1, input_args, "", e.strerror) - - -def run_cmds(args: list[str], wait: bool = True, shell: bool = False) -> list[str]: - """ - Execute a series of commands on the host and returns a list of the combined stderr - stdout output. - - :param args: command arguments - :param wait: True to wait for status, False otherwise - :param shell: True to use shell, False otherwise - :return: combined stdout and stderr - :raises CoreCommandError: when there is a non-zero exit status or the file to - execute is not found - """ - outputs = [] - for arg in args: - output = cmd(arg, wait=wait, shell=shell) - outputs.append(output) - return outputs + logging.error("cmd error: %s", e.strerror) + raise CoreCommandError(1, args, "", e.strerror) def file_munge(pathname: str, header: str, text: str) -> None: @@ -282,7 +260,7 @@ def file_demunge(pathname: str, header: str) -> None: :param header: header text to target for removal :return: nothing """ - with open(pathname) as read_file: + with open(pathname, "r") as read_file: lines = read_file.readlines() start = None @@ -304,7 +282,7 @@ def file_demunge(pathname: str, header: str) -> None: def expand_corepath( pathname: str, session: "Session" = None, node: "CoreNode" = None -) -> Path: +) -> str: """ Expand a file path given session information. @@ -316,12 +294,14 @@ def expand_corepath( if session is not None: pathname = pathname.replace("~", f"/home/{session.user}") pathname = pathname.replace("%SESSION%", str(session.id)) - pathname = pathname.replace("%SESSION_DIR%", str(session.directory)) + pathname = pathname.replace("%SESSION_DIR%", session.session_dir) pathname = pathname.replace("%SESSION_USER%", session.user) + if node is not None: pathname = pathname.replace("%NODE%", str(node.id)) pathname = pathname.replace("%NODENAME%", node.name) - return Path(pathname) + + return pathname def sysctl_devname(devname: str) -> Optional[str]: @@ -336,7 +316,7 @@ def sysctl_devname(devname: str) -> Optional[str]: return devname.replace(".", "/") -def load_config(file_path: Path, d: dict[str, str]) -> None: +def load_config(file_path: Path, d: Dict[str, str]) -> None: """ Read key=value pairs from a file, into a dict. Skip comments; strip newline characters and spacing. 
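For reference, a minimal sketch of how the reworked cmd() helper above might be called, assuming only what this hunk shows: a command string plus optional env, cwd, wait, and shell arguments, a stdout return value, and CoreCommandError raised on a non-zero exit or a missing executable. The command string and function name are illustrative, not part of the patch:

    from core import utils
    from core.errors import CoreCommandError

    def list_host_addresses():
        # illustrative command; any host command string is handled the same way
        try:
            return utils.cmd("ip -o -f inet address show", shell=False)
        except CoreCommandError:
            # raised when the command exits non-zero or cannot be found
            return ""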
@@ -354,25 +334,10 @@ def load_config(file_path: Path, d: dict[str, str]) -> None: key, value = line.split("=", 1) d[key] = value.strip() except ValueError: - logger.exception("error reading file to dict: %s", file_path) + logging.exception("error reading file to dict: %s", file_path) -def load_module(import_statement: str, clazz: Generic[T]) -> list[T]: - classes = [] - try: - module = importlib.import_module(import_statement) - members = inspect.getmembers(module, lambda x: _is_class(module, x, clazz)) - for member in members: - valid_class = member[1] - classes.append(valid_class) - except Exception: - logger.exception( - "unexpected error during import, skipping: %s", import_statement - ) - return classes - - -def load_classes(path: Path, clazz: Generic[T]) -> list[T]: +def load_classes(path: str, clazz: Generic[T]) -> T: """ Dynamically load classes for use within CORE. @@ -381,132 +346,55 @@ def load_classes(path: Path, clazz: Generic[T]) -> list[T]: :return: list of classes loaded """ # validate path exists - logger.debug("attempting to load modules from path: %s", path) - if not path.is_dir(): - logger.warning("invalid custom module directory specified" ": %s", path) + logging.debug("attempting to load modules from path: %s", path) + if not os.path.isdir(path): + logging.warning("invalid custom module directory specified" ": %s", path) # check if path is in sys.path - parent = str(path.parent) - if parent not in sys.path: - logger.debug("adding parent path to allow imports: %s", parent) - sys.path.append(parent) + parent_path = os.path.dirname(path) + if parent_path not in sys.path: + logging.debug("adding parent path to allow imports: %s", parent_path) + sys.path.append(parent_path) + + # retrieve potential service modules, and filter out invalid modules + base_module = os.path.basename(path) + module_names = os.listdir(path) + module_names = filter(lambda x: _valid_module(path, x), module_names) + module_names = map(lambda x: x[:-3], module_names) + # import and add all service modules in the path classes = [] - for p in path.iterdir(): - if not _valid_module(p): - continue - import_statement = f"{path.name}.{p.stem}" - logger.debug("importing custom module: %s", import_statement) - loaded = load_module(import_statement, clazz) - classes.extend(loaded) + for module_name in module_names: + import_statement = f"{base_module}.{module_name}" + logging.debug("importing custom module: %s", import_statement) + try: + module = importlib.import_module(import_statement) + members = inspect.getmembers(module, lambda x: _is_class(module, x, clazz)) + for member in members: + valid_class = member[1] + classes.append(valid_class) + except Exception: + logging.exception( + "unexpected error during import, skipping: %s", import_statement + ) + return classes -def load_logging_config(config_path: Path) -> None: +def load_logging_config(config_path: str) -> None: """ Load CORE logging configuration file. :param config_path: path to logging config file :return: nothing """ - with config_path.open("r") as f: - log_config = json.load(f) - logging.config.dictConfig(log_config) - - -def run_cmds_threaded( - node_cmds: list[tuple["CoreNode", list[str]]], - wait: bool = True, - shell: bool = False, - workers: int = None, -) -> tuple[dict[int, list[str]], list[Exception]]: - """ - Run the set of commands for the node provided. Each node will - run the commands within the context of a threadpool. 
- - :param node_cmds: list of tuples of nodes and commands to run within them - :param wait: True to wait for status, False otherwise - :param shell: True to run shell like, False otherwise - :param workers: number of workers for threadpool, uses library default otherwise - :return: tuple including dict of node id to list of command output and a list of - exceptions if any - """ - - def _node_cmds( - _target: "CoreNode", _cmds: list[str], _wait: bool, _shell: bool - ) -> list[str]: - cmd_outputs = [] - for _cmd in _cmds: - output = _target.cmd(_cmd, wait=_wait, shell=_shell) - cmd_outputs.append(output) - return cmd_outputs - - with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor: - futures = [] - node_mappings = {} - for node, cmds in node_cmds: - future = executor.submit(_node_cmds, node, cmds, wait, shell) - node_mappings[future] = node - futures.append(future) - outputs = {} - exceptions = [] - for future in concurrent.futures.as_completed(futures): - try: - result = future.result() - node = node_mappings[future] - outputs[node.id] = result - except Exception as e: - logger.exception("thread pool exception") - exceptions.append(e) - return outputs, exceptions - - -def run_cmds_mp( - node_cmds: list[tuple["CoreNode", list[str]]], - wait: bool = True, - shell: bool = False, - workers: int = None, -) -> tuple[dict[int, list[str]], list[Exception]]: - """ - Run the set of commands for the node provided. Each node will - run the commands within the context of a process pool. This will not work - for distributed nodes and throws an exception when encountered. - - :param node_cmds: list of tuples of nodes and commands to run within them - :param wait: True to wait for status, False otherwise - :param shell: True to run shell like, False otherwise - :param workers: number of workers for threadpool, uses library default otherwise - :return: tuple including dict of node id to list of command output and a list of - exceptions if any - :raises CoreError: when a distributed node is provided as input - """ - with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor: - futures = [] - node_mapping = {} - for node, cmds in node_cmds: - node_cmds = [node.create_cmd(x) for x in cmds] - if node.server: - raise CoreError( - f"{node.name} uses a distributed server and not supported" - ) - future = executor.submit(run_cmds, node_cmds, wait=wait, shell=shell) - node_mapping[future] = node - futures.append(future) - exceptions = [] - outputs = {} - for future in concurrent.futures.as_completed(futures): - try: - result = future.result() - node = node_mapping[future] - outputs[node.id] = result - except Exception as e: - logger.exception("thread pool exception") - exceptions.append(e) - return outputs, exceptions + with open(config_path, "r") as log_config_file: + log_config = json.load(log_config_file) + logging.config.dictConfig(log_config) def threadpool( - funcs: list[tuple[Callable, Iterable[Any], dict[Any, Any]]], workers: int = 10 -) -> tuple[list[Any], list[Exception]]: + funcs: List[Tuple[Callable, Iterable[Any], Dict[Any, Any]]], workers: int = 10 +) -> Tuple[List[Any], List[Exception]]: """ Run provided functions, arguments, and keywords within a threadpool collecting results and exceptions. 
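A brief usage sketch for the threadpool() helper shown above, assuming only the signature in this hunk: each entry is a (callable, args, kwargs) tuple that is presumably expanded as func(*args, **kwargs), workers sizes the pool, and the call returns a (results, exceptions) pair. The ping() worker and addresses are purely illustrative:

    from core import utils

    def ping(host, count=1):
        # illustrative worker; any callable plus its args/kwargs can be submitted
        return utils.cmd("ping -c %s %s" % (count, host))

    funcs = [
        (ping, ["10.0.0.1"], {"count": 2}),
        (ping, ["10.0.0.2"], {}),
    ]
    results, exceptions = utils.threadpool(funcs, workers=2)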
@@ -527,7 +415,7 @@ def threadpool( result = future.result() results.append(result) except Exception as e: - logger.exception("thread pool exception") + logging.exception("thread pool exception") exceptions.append(e) return results, exceptions @@ -559,7 +447,7 @@ def iface_config_id(node_id: int, iface_id: int = None) -> int: return node_id -def parse_iface_config_id(config_id: int) -> tuple[int, Optional[int]]: +def parse_iface_config_id(config_id: int) -> Tuple[int, Optional[int]]: """ Parses configuration id, that may be potentially derived from an interface for a node. @@ -573,19 +461,3 @@ def parse_iface_config_id(config_id: int) -> tuple[int, Optional[int]]: iface_id = config_id % IFACE_CONFIG_FACTOR node_id = config_id // IFACE_CONFIG_FACTOR return node_id, iface_id - - -class SetQueue(Queue): - """ - Set backed queue to avoid duplicate submissions. - """ - - def _init(self, maxsize): - self.queue: OrderedDict = OrderedDict() - - def _put(self, item): - self.queue[item] = None - - def _get(self): - key, _ = self.queue.popitem(last=False) - return key diff --git a/daemon/core/xml/corexml.py b/daemon/core/xml/corexml.py index d566b501..667ebae8 100644 --- a/daemon/core/xml/corexml.py +++ b/daemon/core/xml/corexml.py @@ -1,38 +1,31 @@ import logging -from pathlib import Path -from typing import TYPE_CHECKING, Any, Generic, Optional, TypeVar +from typing import TYPE_CHECKING, Any, Dict, Generic, List, Optional, Type, TypeVar from lxml import etree import core.nodes.base import core.nodes.physical from core import utils -from core.config import Configuration -from core.emane.nodes import EmaneNet, EmaneOptions -from core.emulator.data import InterfaceData, LinkOptions +from core.emane.nodes import EmaneNet +from core.emulator.data import InterfaceData, LinkData, LinkOptions, NodeOptions from core.emulator.enumerations import EventTypes, NodeTypes from core.errors import CoreXmlError -from core.nodes.base import CoreNodeBase, CoreNodeOptions, NodeBase, Position -from core.nodes.docker import DockerNode, DockerOptions -from core.nodes.interface import CoreInterface -from core.nodes.lxd import LxcNode, LxcOptions -from core.nodes.network import CtrlNet, GreTapBridge, PtpNet, WlanNode -from core.nodes.podman import PodmanNode, PodmanOptions -from core.nodes.wireless import WirelessNode +from core.nodes.base import CoreNodeBase, NodeBase +from core.nodes.docker import DockerNode +from core.nodes.lxd import LxcNode +from core.nodes.network import CtrlNet, WlanNode from core.services.coreservices import CoreService -logger = logging.getLogger(__name__) - if TYPE_CHECKING: from core.emane.emanemodel import EmaneModel from core.emulator.session import Session - EmaneModelType = type[EmaneModel] + EmaneModelType = Type[EmaneModel] T = TypeVar("T") def write_xml_file( - xml_element: etree.Element, file_path: Path, doctype: str = None + xml_element: etree.Element, file_path: str, doctype: str = None ) -> None: xml_data = etree.tostring( xml_element, @@ -41,8 +34,8 @@ def write_xml_file( encoding="UTF-8", doctype=doctype, ) - with file_path.open("wb") as f: - f.write(xml_data) + with open(file_path, "wb") as xml_file: + xml_file.write(xml_data) def get_type(element: etree.Element, name: str, _type: Generic[T]) -> Optional[T]: @@ -84,32 +77,46 @@ def create_iface_data(iface_element: etree.Element) -> InterfaceData: ) +def create_emane_config(session: "Session") -> etree.Element: + emane_configuration = etree.Element("emane_global_configuration") + config = session.emane.get_configs() + 
emulator_element = etree.SubElement(emane_configuration, "emulator") + for emulator_config in session.emane.emane_config.emulator_config: + value = config[emulator_config.id] + add_configuration(emulator_element, emulator_config.id, value) + core_element = etree.SubElement(emane_configuration, "core") + for core_config in session.emane.emane_config.core_config: + value = config[core_config.id] + add_configuration(core_element, core_config.id, value) + return emane_configuration + + def create_emane_model_config( node_id: int, model: "EmaneModelType", - config: dict[str, str], + config: Dict[str, str], iface_id: Optional[int], ) -> etree.Element: emane_element = etree.Element("emane_configuration") add_attribute(emane_element, "node", node_id) add_attribute(emane_element, "iface", iface_id) add_attribute(emane_element, "model", model.name) - platform_element = etree.SubElement(emane_element, "platform") - for platform_config in model.platform_config: - value = config[platform_config.id] - add_configuration(platform_element, platform_config.id, value) + mac_element = etree.SubElement(emane_element, "mac") for mac_config in model.mac_config: value = config[mac_config.id] add_configuration(mac_element, mac_config.id, value) + phy_element = etree.SubElement(emane_element, "phy") for phy_config in model.phy_config: value = config[phy_config.id] add_configuration(phy_element, phy_config.id, value) + external_element = etree.SubElement(emane_element, "external") for external_config in model.external_config: value = config[external_config.id] add_configuration(external_element, external_config.id, value) + return emane_element @@ -149,8 +156,8 @@ class NodeElement: class ServiceElement: - def __init__(self, service: type[CoreService]) -> None: - self.service: type[CoreService] = service + def __init__(self, service: Type[CoreService]) -> None: + self.service: Type[CoreService] = service self.element: etree.Element = etree.Element("service") add_attribute(self.element, "name", service.name) self.add_directories() @@ -213,7 +220,7 @@ class ServiceElement: class DeviceElement(NodeElement): def __init__(self, session: "Session", node: NodeBase) -> None: super().__init__(session, node, "device") - add_attribute(self.element, "type", node.model) + add_attribute(self.element, "type", node.type) self.add_class() self.add_services() @@ -226,9 +233,6 @@ class DeviceElement(NodeElement): elif isinstance(self.node, LxcNode): clazz = "lxc" image = self.node.image - elif isinstance(self.node, PodmanNode): - clazz = "podman" - image = self.node.image add_attribute(self.element, "class", clazz) add_attribute(self.element, "image", image) @@ -249,31 +253,23 @@ class DeviceElement(NodeElement): class NetworkElement(NodeElement): def __init__(self, session: "Session", node: NodeBase) -> None: super().__init__(session, node, "network") - if isinstance(self.node, WlanNode): - if self.node.wireless_model: - add_attribute(self.element, "model", self.node.wireless_model.name) - if self.node.mobility: - add_attribute(self.element, "mobility", self.node.mobility.name) - if isinstance(self.node, EmaneNet): - if self.node.wireless_model: - add_attribute(self.element, "model", self.node.wireless_model.name) - if self.node.mobility: - add_attribute(self.element, "mobility", self.node.mobility.name) - if isinstance(self.node, GreTapBridge): - add_attribute(self.element, "grekey", self.node.grekey) - if isinstance(self.node, WirelessNode): - config = self.node.get_config() - self.add_wireless_config(config) + model = 
getattr(self.node, "model", None) + if model: + add_attribute(self.element, "model", model.name) + mobility = getattr(self.node, "mobility", None) + if mobility: + add_attribute(self.element, "mobility", mobility.name) + grekey = getattr(self.node, "grekey", None) + if grekey and grekey is not None: + add_attribute(self.element, "grekey", grekey) self.add_type() def add_type(self) -> None: - node_type = self.session.get_node_type(type(self.node)) - add_attribute(self.element, "type", node_type.name) - - def add_wireless_config(self, config: dict[str, Configuration]) -> None: - wireless_element = etree.SubElement(self.element, "wireless") - for config_item in config.values(): - add_configuration(wireless_element, config_item.id, config_item.default) + if self.node.apitype: + node_type = self.node.apitype.name + else: + node_type = self.node.__class__.__name__ + add_attribute(self.element, "type", node_type) class CoreXmlWriter: @@ -286,8 +282,8 @@ class CoreXmlWriter: def write_session(self) -> None: # generate xml content - self.write_nodes() - self.write_links() + links = self.write_nodes() + self.write_links(links) self.write_mobility_configs() self.write_emane_configs() self.write_service_configs() @@ -299,12 +295,13 @@ class CoreXmlWriter: self.write_session_metadata() self.write_default_services() - def write(self, path: Path) -> None: - self.scenario.set("name", str(path)) + def write(self, file_name: str) -> None: + self.scenario.set("name", file_name) + # write out generated xml xml_tree = etree.ElementTree(self.scenario) xml_tree.write( - str(path), xml_declaration=True, pretty_print=True, encoding="UTF-8" + file_name, xml_declaration=True, pretty_print=True, encoding="UTF-8" ) def write_session_origin(self) -> None: @@ -351,9 +348,16 @@ class CoreXmlWriter: def write_session_options(self) -> None: option_elements = etree.Element("session_options") - for option in self.session.options.options: - value = self.session.options.get(option.id) - add_configuration(option_elements, option.id, value) + options_config = self.session.options.get_configs() + if not options_config: + return + + default_options = self.session.options.default_values() + for _id in default_options: + default_value = default_options[_id] + value = options_config.get(_id, default_value) + add_configuration(option_elements, _id, value) + if option_elements.getchildren(): self.scenario.append(option_elements) @@ -372,16 +376,22 @@ class CoreXmlWriter: self.scenario.append(metadata_elements) def write_emane_configs(self) -> None: + emane_global_configuration = create_emane_config(self.session) + self.scenario.append(emane_global_configuration) emane_configurations = etree.Element("emane_configurations") - for node_id, model_configs in self.session.emane.node_configs.items(): + for node_id in self.session.emane.nodes(): + all_configs = self.session.emane.get_all_configs(node_id) + if not all_configs: + continue node_id, iface_id = utils.parse_iface_config_id(node_id) - for model_name, config in model_configs.items(): - logger.debug( + for model_name in all_configs: + config = all_configs[model_name] + logging.debug( "writing emane config node(%s) model(%s)", node_id, model_name ) - model_class = self.session.emane.get_model(model_name) + model = self.session.emane.models[model_name] emane_configuration = create_emane_model_config( - node_id, model_class, config, iface_id + node_id, model, config, iface_id ) emane_configurations.append(emane_configuration) if emane_configurations.getchildren(): @@ -396,7 +406,7 @@ 
class CoreXmlWriter: for model_name in all_configs: config = all_configs[model_name] - logger.debug( + logging.debug( "writing mobility config node(%s) model(%s)", node_id, model_name ) mobility_configuration = etree.SubElement( @@ -449,48 +459,52 @@ class CoreXmlWriter: self.scenario.append(service_configurations) def write_default_services(self) -> None: - models = etree.Element("default_services") - for model in self.session.services.default_services: - services = self.session.services.default_services[model] - model = etree.SubElement(models, "node", type=model) + node_types = etree.Element("default_services") + for node_type in self.session.services.default_services: + services = self.session.services.default_services[node_type] + node_type = etree.SubElement(node_types, "node", type=node_type) for service in services: - etree.SubElement(model, "service", name=service) - if models.getchildren(): - self.scenario.append(models) + etree.SubElement(node_type, "service", name=service) - def write_nodes(self) -> None: - for node in self.session.nodes.values(): + if node_types.getchildren(): + self.scenario.append(node_types) + + def write_nodes(self) -> List[LinkData]: + links = [] + for node_id in self.session.nodes: + node = self.session.nodes[node_id] # network node is_network_or_rj45 = isinstance( node, (core.nodes.base.CoreNetworkBase, core.nodes.physical.Rj45Node) ) is_controlnet = isinstance(node, CtrlNet) - is_ptp = isinstance(node, PtpNet) - if is_network_or_rj45 and not (is_controlnet or is_ptp): + if is_network_or_rj45 and not is_controlnet: self.write_network(node) # device node elif isinstance(node, core.nodes.base.CoreNodeBase): self.write_device(node) + # add known links + links.extend(node.links()) + return links + def write_network(self, node: NodeBase) -> None: + # ignore p2p and other nodes that are not part of the api + if not node.apitype: + return + network = NetworkElement(self.session, node) self.networks.append(network.element) - def write_links(self) -> None: + def write_links(self, links: List[LinkData]) -> None: link_elements = etree.Element("links") - for core_link in self.session.link_manager.links(): - node1, iface1 = core_link.node1, core_link.iface1 - node2, iface2 = core_link.node2, core_link.iface2 - unidirectional = core_link.is_unidirectional() - link_element = self.create_link_element( - node1, iface1, node2, iface2, core_link.options(), unidirectional - ) + # add link data + for link_data in links: + # skip basic range links + if link_data.iface1 is None and link_data.iface2 is None: + continue + link_element = self.create_link_element(link_data) link_elements.append(link_element) - if unidirectional: - link_element = self.create_link_element( - node2, iface2, node1, iface1, iface2.options, unidirectional - ) - link_elements.append(link_element) if link_elements.getchildren(): self.scenario.append(link_elements) @@ -499,71 +513,67 @@ class CoreXmlWriter: self.devices.append(device.element) def create_iface_element( - self, element_name: str, iface: CoreInterface + self, element_name: str, node_id: int, iface_data: InterfaceData ) -> etree.Element: iface_element = etree.Element(element_name) - # check if interface if connected to emane - if isinstance(iface.node, CoreNodeBase) and isinstance(iface.net, EmaneNet): - nem_id = self.session.emane.get_nem_id(iface) - add_attribute(iface_element, "nem", nem_id) - ip4 = iface.get_ip4() - ip4_mask = None - if ip4: - ip4_mask = ip4.prefixlen - ip4 = str(ip4.ip) - ip6 = iface.get_ip6() - ip6_mask = None - if ip6: 
- ip6_mask = ip6.prefixlen - ip6 = str(ip6.ip) - add_attribute(iface_element, "id", iface.id) - add_attribute(iface_element, "name", iface.name) - add_attribute(iface_element, "mac", iface.mac) - add_attribute(iface_element, "ip4", ip4) - add_attribute(iface_element, "ip4_mask", ip4_mask) - add_attribute(iface_element, "ip6", ip6) - add_attribute(iface_element, "ip6_mask", ip6_mask) + node = self.session.get_node(node_id, NodeBase) + if isinstance(node, CoreNodeBase): + iface = node.get_iface(iface_data.id) + # check if emane interface + if isinstance(iface.net, EmaneNet): + nem_id = self.session.emane.get_nem_id(iface) + add_attribute(iface_element, "nem", nem_id) + add_attribute(iface_element, "id", iface_data.id) + add_attribute(iface_element, "name", iface_data.name) + add_attribute(iface_element, "mac", iface_data.mac) + add_attribute(iface_element, "ip4", iface_data.ip4) + add_attribute(iface_element, "ip4_mask", iface_data.ip4_mask) + add_attribute(iface_element, "ip6", iface_data.ip6) + add_attribute(iface_element, "ip6_mask", iface_data.ip6_mask) return iface_element - def create_link_element( - self, - node1: NodeBase, - iface1: Optional[CoreInterface], - node2: NodeBase, - iface2: Optional[CoreInterface], - options: LinkOptions, - unidirectional: bool, - ) -> etree.Element: + def create_link_element(self, link_data: LinkData) -> etree.Element: link_element = etree.Element("link") - add_attribute(link_element, "node1", node1.id) - add_attribute(link_element, "node2", node2.id) + add_attribute(link_element, "node1", link_data.node1_id) + add_attribute(link_element, "node2", link_data.node2_id) + # check for interface one - if iface1 is not None: - iface1 = self.create_iface_element("iface1", iface1) + if link_data.iface1 is not None: + iface1 = self.create_iface_element( + "iface1", link_data.node1_id, link_data.iface1 + ) link_element.append(iface1) + # check for interface two - if iface2 is not None: - iface2 = self.create_iface_element("iface2", iface2) + if link_data.iface2 is not None: + iface2 = self.create_iface_element( + "iface2", link_data.node2_id, link_data.iface2 + ) link_element.append(iface2) + # check for options, don't write for emane/wlan links - is_node1_wireless = isinstance(node1, (WlanNode, EmaneNet, WirelessNode)) - is_node2_wireless = isinstance(node2, (WlanNode, EmaneNet, WirelessNode)) - if not (is_node1_wireless or is_node2_wireless): - unidirectional = 1 if unidirectional else 0 - options_element = etree.Element("options") - add_attribute(options_element, "delay", options.delay) - add_attribute(options_element, "bandwidth", options.bandwidth) - add_attribute(options_element, "loss", options.loss) - add_attribute(options_element, "dup", options.dup) - add_attribute(options_element, "jitter", options.jitter) - add_attribute(options_element, "mer", options.mer) - add_attribute(options_element, "burst", options.burst) - add_attribute(options_element, "mburst", options.mburst) - add_attribute(options_element, "unidirectional", unidirectional) - add_attribute(options_element, "key", options.key) - add_attribute(options_element, "buffer", options.buffer) - if options_element.items(): - link_element.append(options_element) + node1 = self.session.get_node(link_data.node1_id, NodeBase) + node2 = self.session.get_node(link_data.node2_id, NodeBase) + is_node1_wireless = isinstance(node1, (WlanNode, EmaneNet)) + is_node2_wireless = isinstance(node2, (WlanNode, EmaneNet)) + if not any([is_node1_wireless, is_node2_wireless]): + options_data = link_data.options + 
options = etree.Element("options") + add_attribute(options, "delay", options_data.delay) + add_attribute(options, "bandwidth", options_data.bandwidth) + add_attribute(options, "loss", options_data.loss) + add_attribute(options, "dup", options_data.dup) + add_attribute(options, "jitter", options_data.jitter) + add_attribute(options, "mer", options_data.mer) + add_attribute(options, "burst", options_data.burst) + add_attribute(options, "mburst", options_data.mburst) + add_attribute(options, "unidirectional", options_data.unidirectional) + add_attribute(options, "network_id", link_data.network_id) + add_attribute(options, "key", options_data.key) + add_attribute(options, "buffer", options_data.buffer) + if options.items(): + link_element.append(options) + return link_element @@ -572,8 +582,8 @@ class CoreXmlReader: self.session: "Session" = session self.scenario: Optional[etree.ElementTree] = None - def read(self, file_path: Path) -> None: - xml_tree = etree.parse(str(file_path)) + def read(self, file_name: str) -> None: + xml_tree = etree.parse(file_name) self.scenario = xml_tree.getroot() # read xml session content @@ -585,6 +595,7 @@ class CoreXmlReader: self.read_session_origin() self.read_service_configs() self.read_mobility_configs() + self.read_emane_global_config() self.read_nodes() self.read_links() self.read_emane_configs() @@ -596,12 +607,14 @@ class CoreXmlReader: return for node in default_services.iterchildren(): - model = node.get("type") + node_type = node.get("type") services = [] for service in node.iterchildren(): services.append(service.get("name")) - logger.info("reading default services for nodes(%s): %s", model, services) - self.session.services.default_services[model] = services + logging.info( + "reading default services for nodes(%s): %s", node_type, services + ) + self.session.services.default_services[node_type] = services def read_session_metadata(self) -> None: session_metadata = self.scenario.find("session_metadata") @@ -613,7 +626,7 @@ class CoreXmlReader: name = data.get("name") value = data.get("value") configs[name] = value - logger.info("reading session metadata: %s", configs) + logging.info("reading session metadata: %s", configs) self.session.metadata = configs def read_session_options(self) -> None: @@ -625,8 +638,9 @@ class CoreXmlReader: name = configuration.get("name") value = configuration.get("value") xml_config[name] = value - logger.info("reading session options: %s", xml_config) - self.session.options.update(xml_config) + logging.info("reading session options: %s", xml_config) + config = self.session.options.get_configs() + config.update(xml_config) def read_session_hooks(self) -> None: session_hooks = self.scenario.find("session_hooks") @@ -638,7 +652,7 @@ class CoreXmlReader: state = get_int(hook, "state") state = EventTypes(state) data = hook.text - logger.info("reading hook: state(%s) name(%s)", state, name) + logging.info("reading hook: state(%s) name(%s)", state, name) self.session.add_hook(state, name, data) def read_servers(self) -> None: @@ -648,7 +662,7 @@ class CoreXmlReader: for server in servers.iterchildren(): name = server.get("name") address = server.get("address") - logger.info("reading server: name(%s) address(%s)", name, address) + logging.info("reading server: name(%s) address(%s)", name, address) self.session.distributed.add_server(name, address) def read_session_origin(self) -> None: @@ -660,19 +674,19 @@ class CoreXmlReader: lon = get_float(session_origin, "lon") alt = get_float(session_origin, "alt") if all([lat, lon, 
alt]): - logger.info("reading session reference geo: %s, %s, %s", lat, lon, alt) + logging.info("reading session reference geo: %s, %s, %s", lat, lon, alt) self.session.location.setrefgeo(lat, lon, alt) scale = get_float(session_origin, "scale") if scale: - logger.info("reading session reference scale: %s", scale) + logging.info("reading session reference scale: %s", scale) self.session.location.refscale = scale x = get_float(session_origin, "x") y = get_float(session_origin, "y") z = get_float(session_origin, "z") if all([x, y]): - logger.info("reading session reference xyz: %s, %s, %s", x, y, z) + logging.info("reading session reference xyz: %s, %s, %s", x, y, z) self.session.location.refxyz = (x, y, z) def read_service_configs(self) -> None: @@ -683,7 +697,7 @@ class CoreXmlReader: for service_configuration in service_configurations.iterchildren(): node_id = get_int(service_configuration, "node") service_name = service_configuration.get("name") - logger.info( + logging.info( "reading custom service(%s) for node(%s)", service_name, node_id ) self.session.services.set_service(node_id, service_name) @@ -719,10 +733,28 @@ class CoreXmlReader: files.add(name) service.configs = tuple(files) + def read_emane_global_config(self) -> None: + emane_global_configuration = self.scenario.find("emane_global_configuration") + if emane_global_configuration is None: + return + emulator_configuration = emane_global_configuration.find("emulator") + configs = {} + for config in emulator_configuration.iterchildren(): + name = config.get("name") + value = config.get("value") + configs[name] = value + core_configuration = emane_global_configuration.find("core") + for config in core_configuration.iterchildren(): + name = config.get("name") + value = config.get("value") + configs[name] = value + self.session.emane.set_configs(config=configs) + def read_emane_configs(self) -> None: emane_configurations = self.scenario.find("emane_configurations") if emane_configurations is None: return + for emane_configuration in emane_configurations.iterchildren(): node_id = get_int(emane_configuration, "node") iface_id = get_int(emane_configuration, "iface") @@ -733,39 +765,38 @@ class CoreXmlReader: node = self.session.nodes.get(node_id) if not node: raise CoreXmlError(f"node for emane config doesn't exist: {node_id}") - self.session.emane.get_model(model_name) + model = self.session.emane.models.get(model_name) + if not model: + raise CoreXmlError(f"invalid emane model: {model_name}") if iface_id is not None and iface_id not in node.ifaces: raise CoreXmlError( f"invalid interface id({iface_id}) for node({node.name})" ) # read and set emane model configuration - platform_configuration = emane_configuration.find("platform") - for config in platform_configuration.iterchildren(): - name = config.get("name") - value = config.get("value") - configs[name] = value mac_configuration = emane_configuration.find("mac") for config in mac_configuration.iterchildren(): name = config.get("name") value = config.get("value") configs[name] = value + phy_configuration = emane_configuration.find("phy") for config in phy_configuration.iterchildren(): name = config.get("name") value = config.get("value") configs[name] = value + external_configuration = emane_configuration.find("external") for config in external_configuration.iterchildren(): name = config.get("name") value = config.get("value") configs[name] = value - logger.info( + logging.info( "reading emane configuration node(%s) model(%s)", node_id, model_name ) node_id = 
utils.iface_config_id(node_id, iface_id) - self.session.emane.set_config(node_id, model_name, configs) + self.session.emane.set_model_config(node_id, model_name, configs) def read_mobility_configs(self) -> None: mobility_configurations = self.scenario.find("mobility_configurations") @@ -782,7 +813,7 @@ class CoreXmlReader: value = config.get("value") configs[name] = value - logger.info( + logging.info( "reading mobility configuration node(%s) model(%s)", node_id, model_name ) self.session.mobility.set_model_config(node_id, model_name, configs) @@ -806,87 +837,71 @@ class CoreXmlReader: clazz = device_element.get("class") image = device_element.get("image") server = device_element.get("server") - canvas = get_int(device_element, "canvas") + options = NodeOptions( + name=name, model=model, image=image, icon=icon, server=server + ) node_type = NodeTypes.DEFAULT if clazz == "docker": node_type = NodeTypes.DOCKER elif clazz == "lxc": node_type = NodeTypes.LXC - elif clazz == "podman": - node_type = NodeTypes.PODMAN _class = self.session.get_node_class(node_type) - options = _class.create_options() - options.icon = icon - options.canvas = canvas - # check for special options - if isinstance(options, CoreNodeOptions): - options.model = model - service_elements = device_element.find("services") - if service_elements is not None: - options.services.extend( - x.get("name") for x in service_elements.iterchildren() - ) - config_service_elements = device_element.find("configservices") - if config_service_elements is not None: - options.config_services.extend( - x.get("name") for x in config_service_elements.iterchildren() - ) - if isinstance(options, (DockerOptions, LxcOptions, PodmanOptions)): - options.image = image - # get position information + + service_elements = device_element.find("services") + if service_elements is not None: + options.services = [x.get("name") for x in service_elements.iterchildren()] + + config_service_elements = device_element.find("configservices") + if config_service_elements is not None: + options.config_services = [ + x.get("name") for x in config_service_elements.iterchildren() + ] + position_element = device_element.find("position") - position = None if position_element is not None: - position = Position() x = get_float(position_element, "x") y = get_float(position_element, "y") if all([x, y]): - position.set(x, y) + options.set_position(x, y) + lat = get_float(position_element, "lat") lon = get_float(position_element, "lon") alt = get_float(position_element, "alt") if all([lat, lon, alt]): - position.set_geo(lon, lat, alt) - logger.info("reading node id(%s) model(%s) name(%s)", node_id, model, name) - self.session.add_node(_class, node_id, name, server, position, options) + options.set_location(lat, lon, alt) + + logging.info("reading node id(%s) model(%s) name(%s)", node_id, model, name) + self.session.add_node(_class, node_id, options) def read_network(self, network_element: etree.Element) -> None: node_id = get_int(network_element, "id") name = network_element.get("name") - server = network_element.get("server") node_type = NodeTypes[network_element.get("type")] _class = self.session.get_node_class(node_type) - options = _class.create_options() - options.canvas = get_int(network_element, "canvas") - options.icon = network_element.get("icon") - if isinstance(options, EmaneOptions): - options.emane_model = network_element.get("model") + icon = network_element.get("icon") + server = network_element.get("server") + options = NodeOptions(name=name, icon=icon, 
server=server) + if node_type == NodeTypes.EMANE: + model = network_element.get("model") + options.emane = model + position_element = network_element.find("position") - position = None if position_element is not None: - position = Position() x = get_float(position_element, "x") y = get_float(position_element, "y") if all([x, y]): - position.set(x, y) + options.set_position(x, y) + lat = get_float(position_element, "lat") lon = get_float(position_element, "lon") alt = get_float(position_element, "alt") if all([lat, lon, alt]): - position.set_geo(lon, lat, alt) - logger.info( + options.set_location(lat, lon, alt) + + logging.info( "reading node id(%s) node_type(%s) name(%s)", node_id, node_type, name ) - node = self.session.add_node(_class, node_id, name, server, position, options) - if isinstance(node, WirelessNode): - wireless_element = network_element.find("wireless") - if wireless_element: - config = {} - for config_element in wireless_element.iterchildren(): - name = config_element.get("name") - value = config_element.get("value") - config[name] = value - node.set_config(config) + self.session.add_node(_class, node_id, options) def read_configservice_configs(self) -> None: configservice_configs = self.scenario.find("configservice_configurations") @@ -913,7 +928,7 @@ class CoreXmlReader: for template_element in templates_element.iterchildren(): name = template_element.get("name") template = template_element.text - logger.info( + logging.info( "loading xml template(%s): %s", type(template), template ) service.set_template(name, template) @@ -965,12 +980,12 @@ class CoreXmlReader: options.buffer = get_int(options_element, "buffer") if options.unidirectional == 1 and node_set in node_sets: - logger.info("updating link node1(%s) node2(%s)", node1_id, node2_id) + logging.info("updating link node1(%s) node2(%s)", node1_id, node2_id) self.session.update_link( node1_id, node2_id, iface1_data.id, iface2_data.id, options ) else: - logger.info("adding link node1(%s) node2(%s)", node1_id, node2_id) + logging.info("adding link node1(%s) node2(%s)", node1_id, node2_id) self.session.add_link( node1_id, node2_id, iface1_data, iface2_data, options ) diff --git a/daemon/core/xml/corexmldeployment.py b/daemon/core/xml/corexmldeployment.py index 0b38e9b0..c062a1d2 100644 --- a/daemon/core/xml/corexmldeployment.py +++ b/daemon/core/xml/corexmldeployment.py @@ -1,6 +1,6 @@ import os import socket -from typing import TYPE_CHECKING +from typing import TYPE_CHECKING, List, Tuple import netaddr from lxml import etree @@ -78,7 +78,7 @@ def get_address_type(address: str) -> str: return address_type -def get_ipv4_addresses(hostname: str) -> list[tuple[str, str]]: +def get_ipv4_addresses(hostname: str) -> List[Tuple[str, str]]: if hostname == "localhost": addresses = [] args = f"{IP} -o -f inet address show" diff --git a/daemon/core/xml/emanexml.py b/daemon/core/xml/emanexml.py index 4b8ada70..c0d5462b 100644 --- a/daemon/core/xml/emanexml.py +++ b/daemon/core/xml/emanexml.py @@ -1,7 +1,7 @@ import logging -from pathlib import Path +import os from tempfile import NamedTemporaryFile -from typing import TYPE_CHECKING, Optional +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple from lxml import etree @@ -12,17 +12,17 @@ from core.emulator.distributed import DistributedServer from core.errors import CoreError from core.nodes.base import CoreNode, CoreNodeBase from core.nodes.interface import CoreInterface +from core.nodes.network import CtrlNet from core.xml import corexml -logger = 
logging.getLogger(__name__) - if TYPE_CHECKING: + from core.emane.emanemanager import EmaneManager, StartData from core.emane.emanemodel import EmaneModel _MAC_PREFIX = "02:02" -def is_external(config: dict[str, str]) -> bool: +def is_external(config: Dict[str, str]) -> bool: """ Checks if the configuration is for an external transport. @@ -32,7 +32,7 @@ def is_external(config: dict[str, str]) -> bool: return config.get("external") == "1" -def _value_to_params(value: str) -> Optional[tuple[str]]: +def _value_to_params(value: str) -> Optional[Tuple[str]]: """ Helper to convert a parameter to a parameter tuple. @@ -47,14 +47,14 @@ def _value_to_params(value: str) -> Optional[tuple[str]]: return None return values except SyntaxError: - logger.exception("error in value string to param list") + logging.exception("error in value string to param list") return None def create_file( xml_element: etree.Element, doc_name: str, - file_path: Path, + file_path: str, server: DistributedServer = None, ) -> None: """ @@ -71,11 +71,10 @@ def create_file( ) if server: temp = NamedTemporaryFile(delete=False) - temp_path = Path(temp.name) - corexml.write_xml_file(xml_element, temp_path, doctype=doctype) + corexml.write_xml_file(xml_element, temp.name, doctype=doctype) temp.close() - server.remote_put(temp_path, file_path) - temp_path.unlink() + server.remote_put(temp.name, file_path) + os.unlink(temp.name) else: corexml.write_xml_file(xml_element, file_path, doctype=doctype) @@ -93,9 +92,9 @@ def create_node_file( :return: """ if isinstance(node, CoreNode): - file_path = node.directory / file_name + file_path = os.path.join(node.nodedir, file_name) else: - file_path = node.session.directory / file_name + file_path = os.path.join(node.session.session_dir, file_name) create_file(xml_element, doc_name, file_path, node.server) @@ -113,9 +112,9 @@ def add_param(xml_element: etree.Element, name: str, value: str) -> None: def add_configurations( xml_element: etree.Element, - configurations: list[Configuration], - config: dict[str, str], - config_ignore: set[str], + configurations: List[Configuration], + config: Dict[str, str], + config_ignore: Set, ) -> None: """ Add emane model configurations to xml element. @@ -144,72 +143,77 @@ def add_configurations( def build_platform_xml( - nem_id: int, - nem_port: int, - emane_net: EmaneNet, - iface: CoreInterface, - config: dict[str, str], + emane_manager: "EmaneManager", control_net: CtrlNet, data: "StartData" ) -> None: """ - Create platform xml for a nem/interface. + Create platform xml for a specific node. 
- :param nem_id: nem id for current node/interface - :param nem_port: control port to configure for emane - :param emane_net: emane network associate with node and interface - :param iface: node interface to create platform xml for - :param config: emane configuration for interface - :return: nothing + :param emane_manager: emane manager with emane + configurations + :param control_net: control net node for this emane + network + :param data: start data for a node connected to emane and associated interfaces + :return: the next nem id that can be used for creating platform xml files """ # create top level platform element + transport_configs = {"otamanagerdevice", "eventservicedevice"} platform_element = etree.Element("platform") - for configuration in emane_net.wireless_model.platform_config: + for configuration in emane_manager.emane_config.emulator_config: name = configuration.id - value = config[configuration.id] + if not isinstance(data.node, CoreNode) and name in transport_configs: + value = control_net.brname + else: + value = emane_manager.get_config(name) add_param(platform_element, name, value) - add_param( - platform_element, - emane_net.wireless_model.platform_controlport, - f"0.0.0.0:{nem_port}", - ) - # build nem xml - nem_definition = nem_file_name(iface) - nem_element = etree.Element( - "nem", id=str(nem_id), name=iface.localname, definition=nem_definition - ) + # create nem xml entries for all interfaces + for iface in data.ifaces: + emane_net = iface.net + if not isinstance(emane_net, EmaneNet): + raise CoreError( + f"emane interface not connected to emane net: {emane_net.name}" + ) + nem_id = emane_manager.next_nem_id() + emane_manager.set_nem(nem_id, iface) + emane_manager.write_nem(iface, nem_id) + config = emane_manager.get_iface_config(emane_net, iface) + emane_net.model.build_xml_files(config, iface) - # create model based xml files - emane_net.wireless_model.build_xml_files(config, iface) + # build nem xml + nem_definition = nem_file_name(iface) + nem_element = etree.Element( + "nem", id=str(nem_id), name=iface.localname, definition=nem_definition + ) - # check if this is an external transport - if is_external(config): - nem_element.set("transport", "external") - platform_endpoint = "platformendpoint" - add_param(nem_element, platform_endpoint, config[platform_endpoint]) - transport_endpoint = "transportendpoint" - add_param(nem_element, transport_endpoint, config[transport_endpoint]) + # check if this is an external transport + if is_external(config): + nem_element.set("transport", "external") + platform_endpoint = "platformendpoint" + add_param(nem_element, platform_endpoint, config[platform_endpoint]) + transport_endpoint = "transportendpoint" + add_param(nem_element, transport_endpoint, config[transport_endpoint]) - # define transport element - transport_name = transport_file_name(iface) - transport_element = etree.SubElement( - nem_element, "transport", definition=transport_name - ) - add_param(transport_element, "device", iface.name) + # define transport element + transport_name = transport_file_name(iface) + transport_element = etree.SubElement( + nem_element, "transport", definition=transport_name + ) + add_param(transport_element, "device", iface.name) - # add nem element to platform element - platform_element.append(nem_element) + # add nem element to platform element + platform_element.append(nem_element) - # generate and assign interface mac address based on nem id - mac = _MAC_PREFIX + ":00:00:" - mac += f"{(nem_id >> 8) & 0xFF:02X}:{nem_id & 
0xFF:02X}" - iface.set_mac(mac) + # generate and assign interface mac address based on nem id + mac = _MAC_PREFIX + ":00:00:" + mac += f"{(nem_id >> 8) & 0xFF:02X}:{nem_id & 0xFF:02X}" + iface.set_mac(mac) doc_name = "platform" - file_name = platform_file_name(iface) - create_node_file(iface.node, platform_element, doc_name, file_name) + file_name = f"{data.node.name}-platform.xml" + create_node_file(data.node, platform_element, doc_name, file_name) -def create_transport_xml(iface: CoreInterface, config: dict[str, str]) -> None: +def create_transport_xml(iface: CoreInterface, config: Dict[str, str]) -> None: """ Build transport xml file for node and transport type. @@ -240,7 +244,7 @@ def create_transport_xml(iface: CoreInterface, config: dict[str, str]) -> None: def create_phy_xml( - emane_model: "EmaneModel", iface: CoreInterface, config: dict[str, str] + emane_model: "EmaneModel", iface: CoreInterface, config: Dict[str, str] ) -> None: """ Create the phy xml document. @@ -261,7 +265,7 @@ def create_phy_xml( def create_mac_xml( - emane_model: "EmaneModel", iface: CoreInterface, config: dict[str, str] + emane_model: "EmaneModel", iface: CoreInterface, config: Dict[str, str] ) -> None: """ Create the mac xml document. @@ -284,7 +288,7 @@ def create_mac_xml( def create_nem_xml( - emane_model: "EmaneModel", iface: CoreInterface, config: dict[str, str] + emane_model: "EmaneModel", iface: CoreInterface, config: Dict[str, str] ) -> None: """ Create the nem xml document. @@ -312,7 +316,7 @@ def create_event_service_xml( group: str, port: str, device: str, - file_directory: Path, + file_directory: str, server: DistributedServer = None, ) -> None: """ @@ -336,7 +340,8 @@ def create_event_service_xml( ): sub_element = etree.SubElement(event_element, name) sub_element.text = value - file_path = file_directory / "libemaneeventservice.xml" + file_name = "libemaneeventservice.xml" + file_path = os.path.join(file_directory, file_name) create_file(event_element, "emaneeventmsgsvc", file_path, server) @@ -389,7 +394,3 @@ def phy_file_name(iface: CoreInterface) -> str: :return: phy xml file name """ return f"{iface.name}-phy.xml" - - -def platform_file_name(iface: CoreInterface) -> str: - return f"{iface.name}-platform.xml" diff --git a/package/etc/core.conf b/daemon/data/core.conf similarity index 89% rename from package/etc/core.conf rename to daemon/data/core.conf index 1923250d..20ee5d1f 100644 --- a/package/etc/core.conf +++ b/daemon/data/core.conf @@ -1,17 +1,19 @@ [core-daemon] #distributed_address = 127.0.0.1 +listenaddr = localhost +port = 4038 grpcaddress = localhost grpcport = 50051 quagga_bin_search = "/usr/local/bin /usr/bin /usr/lib/quagga" quagga_sbin_search = "/usr/local/sbin /usr/sbin /usr/lib/quagga" frr_bin_search = "/usr/local/bin /usr/bin /usr/lib/frr" -frr_sbin_search = "/usr/local/sbin /usr/sbin /usr/lib/frr /usr/libexec/frr" +frr_sbin_search = "/usr/local/sbin /usr/sbin /usr/lib/frr" # uncomment the following line to load custom services from the specified dir # this may be a comma-separated list, and directory names should be unique # and not named 'services' -#custom_services_dir = /home//.coregui/custom_services -#custom_config_services_dir = /home//.coregui/custom_services +#custom_services_dir = /home/username/.core/myservices +#custom_config_services_dir = /home/username/.coregui/custom_services # uncomment to establish a standalone control backchannel for accessing nodes # (overriden by the session option of the same name) @@ -46,7 +48,7 @@ emane_platform_port = 8101 
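The emanexml helpers above (add_param, create_file, build_platform_xml) assemble EMANE documents as lxml element trees before writing them to disk. Below is a minimal, illustrative sketch of that platform/nem/param structure using plain lxml rather than the project's helpers; the element values and file names are placeholders, not taken from the repository.

```python
# Illustrative sketch only: mirrors the platform/nem/param layout the
# emanexml helpers build, without using the project's own functions.
from lxml import etree

platform = etree.Element("platform")
# top-level platform parameter, written as name/value pairs like add_param does
etree.SubElement(platform, "param", name="otamanagerdevice", value="ctrl0")

# one nem entry per interface, pointing at its generated definition files
# (file names here are placeholders, not the project's naming helpers)
nem = etree.SubElement(platform, "nem", id="1", name="eth0", definition="eth0-nem.xml")
transport = etree.SubElement(nem, "transport", definition="eth0-trans.xml")
etree.SubElement(transport, "param", name="device", value="eth0")

print(etree.tostring(platform, pretty_print=True).decode())
```

A tree like this is what create_file then serializes with a doctype, writing it locally or pushing it to a remote host through a DistributedServer.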
emane_transform_port = 8201 emane_event_generate = True emane_event_monitor = False -#emane_models_dir = /home//.coregui/custom_emane +#emane_models_dir = /home/username/.core/myemane # EMANE log level range [0,4] default: 2 #emane_log_level = 2 emane_realtime = True diff --git a/daemon/data/logging.conf b/daemon/data/logging.conf new file mode 100644 index 00000000..7f3d496f --- /dev/null +++ b/daemon/data/logging.conf @@ -0,0 +1,20 @@ +{ + "version": 1, + "handlers": { + "console": { + "class": "logging.StreamHandler", + "formatter": "default", + "level": "DEBUG", + "stream": "ext://sys.stdout" + } + }, + "formatters": { + "default": { + "format": "%(asctime)s - %(levelname)s - %(module)s:%(funcName)s - %(message)s" + } + }, + "root": { + "level": "INFO", + "handlers": ["console"] + } +} diff --git a/daemon/doc/Makefile.am b/daemon/doc/Makefile.am index 9ce90bfa..e46f7d32 100644 --- a/daemon/doc/Makefile.am +++ b/daemon/doc/Makefile.am @@ -1,4 +1,8 @@ # CORE +# (c)2012 the Boeing Company. +# See the LICENSE file included in this distribution. +# +# author: Jeff Ahrenholz # # Builds html and pdf documentation using Sphinx. # diff --git a/package/examples/configservices/switch.py b/daemon/examples/configservices/testing.py similarity index 91% rename from package/examples/configservices/switch.py rename to daemon/examples/configservices/testing.py index 937c3aa8..9706f2c9 100644 --- a/package/examples/configservices/switch.py +++ b/daemon/examples/configservices/testing.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.network import SwitchNode @@ -11,13 +11,13 @@ if __name__ == "__main__": # setup basic network prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(model=None) coreemu = CoreEmu() session = coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) switch = session.add_node(SwitchNode) # node one - options = CoreNode.create_options() options.config_services = ["DefaultRoute", "IPForward"] node1 = session.add_node(CoreNode, options=options) interface = prefixes.create_iface(node1) diff --git a/package/examples/controlnet_updown b/daemon/examples/controlnet_updown similarity index 100% rename from package/examples/controlnet_updown rename to daemon/examples/controlnet_updown diff --git a/docs/docker.md b/daemon/examples/docker/README.md similarity index 59% rename from docs/docker.md rename to daemon/examples/docker/README.md index 562fd453..17c6cb90 100644 --- a/docs/docker.md +++ b/daemon/examples/docker/README.md @@ -1,39 +1,28 @@ -# Docker Node Support +# Docker Support -## Overview - -Provided below is some information for helping setup and use Docker -nodes within a CORE scenario. +Information on how Docker can be leveraged and included to create +nodes based on Docker containers and images to interface with +existing CORE nodes, when needed. ## Installation -### Debian Systems - ```shell sudo apt install docker.io ``` -### RHEL Systems - ## Configuration Custom configuration required to avoid iptable rules being added and removing the need for the default docker network, since core will be orchestrating connections between nodes. 
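The daemon/data/logging.conf introduced above is a standard logging dictConfig document in JSON form. A minimal sketch of applying such a file with the standard library follows; the path used is the repo-relative one from this change, and the installed location may differ.

```python
# Minimal sketch: apply a dictConfig-style JSON logging file like the one above.
import json
import logging
import logging.config

# repo-relative path from this change; the installed path may differ
with open("daemon/data/logging.conf") as f:
    logging.config.dictConfig(json.load(f))

logging.getLogger(__name__).info("logging configured from dictConfig file")
```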
-Place the file below in **/etc/docker/docker.json** - -```json -{ - "bridge": "none", - "iptables": false -} -``` +Place the file below in **/etc/docker/** +* daemon.json ## Group Setup -To use Docker nodes within the python GUI, you will need to make sure the -user running the GUI is a member of the docker group. +To use Docker nodes within the python GUI, you will need to make sure the user running the GUI is a member of the +docker group. ```shell # add group if does not exist @@ -46,13 +35,20 @@ sudo usermod -aG docker $USER newgrp docker ``` -## Image Requirements +## Tools and Versions Tested With -Images used by Docker nodes in CORE need to have networking tools installed for -CORE to automate setup and configuration of the network within the container. +* Docker version 18.09.5, build e8ff056 +* nsenter from util-linux 2.31.1 + +## Examples + +This directory provides a few small examples creating Docker nodes +and linking them to themselves or with standard CORE nodes. + +Images used by nodes need to have networking tools installed for CORE to automate +setup and configuration of the container. Example Dockerfile: - ``` FROM ubuntu:latest RUN apt-get update @@ -60,12 +56,6 @@ RUN apt-get install -y iproute2 ethtool ``` Build image: - ```shell sudo docker build -t . ``` - -## Tools and Versions Tested With - -* Docker version 18.09.5, build e8ff056 -* nsenter from util-linux 2.31.1 diff --git a/daemon/examples/docker/daemon.json b/daemon/examples/docker/daemon.json new file mode 100644 index 00000000..8fefb9ab --- /dev/null +++ b/daemon/examples/docker/daemon.json @@ -0,0 +1,5 @@ +{ + "bridge": "none", + "iptables": false + +} diff --git a/package/examples/docker/docker2core.py b/daemon/examples/docker/docker2core.py similarity index 88% rename from package/examples/docker/docker2core.py rename to daemon/examples/docker/docker2core.py index cd6a5a20..ae7dae79 100644 --- a/package/examples/docker/docker2core.py +++ b/daemon/examples/docker/docker2core.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.docker import DockerNode @@ -14,10 +14,9 @@ if __name__ == "__main__": try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(model=None, image="ubuntu") # create node one - options = DockerNode.create_options() - options.image = "ubuntu" node1 = session.add_node(DockerNode, options=options) interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/docker/docker2docker.py b/daemon/examples/docker/docker2docker.py similarity index 88% rename from package/examples/docker/docker2docker.py rename to daemon/examples/docker/docker2docker.py index 5fa65778..308fd00f 100644 --- a/package/examples/docker/docker2docker.py +++ b/daemon/examples/docker/docker2docker.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.docker import DockerNode @@ -15,10 +15,9 @@ if __name__ == "__main__": # create nodes and interfaces try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(model=None, image="ubuntu") # create node one - options = DockerNode.create_options() - options.image = "ubuntu" node1 = session.add_node(DockerNode, 
options=options) interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/docker/switch.py b/daemon/examples/docker/switch.py similarity index 86% rename from package/examples/docker/switch.py rename to daemon/examples/docker/switch.py index 3f696c56..fa9e4e40 100644 --- a/package/examples/docker/switch.py +++ b/daemon/examples/docker/switch.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.docker import DockerNode @@ -10,19 +10,18 @@ from core.nodes.network import SwitchNode if __name__ == "__main__": logging.basicConfig(level=logging.DEBUG) - # create core session coreemu = CoreEmu() session = coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) + try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(model=None, image="ubuntu") # create switch switch = session.add_node(SwitchNode) # node one - options = DockerNode.create_options() - options.image = "ubuntu" node1 = session.add_node(DockerNode, options=options) interface1_data = prefixes.create_iface(node1) @@ -41,8 +40,6 @@ if __name__ == "__main__": # instantiate session.instantiate() - - print(f"{node2.name}: {node2.volumes.values()}") finally: input("continue to shutdown") coreemu.shutdown() diff --git a/daemon/core/scripts/__init__.py b/daemon/examples/grpc/__init__.py similarity index 100% rename from daemon/core/scripts/__init__.py rename to daemon/examples/grpc/__init__.py diff --git a/daemon/examples/grpc/distributed_switch.py b/daemon/examples/grpc/distributed_switch.py new file mode 100644 index 00000000..e8ddfb4c --- /dev/null +++ b/daemon/examples/grpc/distributed_switch.py @@ -0,0 +1,87 @@ +import argparse +import logging + +from core.api.grpc import client +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState + + +def log_event(event): + logging.info("event: %s", event) + + +def main(args): + core = client.CoreGrpcClient() + + with core.context_connect(): + # create session + response = core.create_session() + session_id = response.session_id + logging.info("created session: %s", response) + + # add distributed server + server_name = "core2" + response = core.add_session_server(session_id, server_name, args.server) + logging.info("added session server: %s", response) + + # handle events session may broadcast + core.events(session_id, log_event) + + # change session state + response = core.set_session_state(session_id, SessionState.CONFIGURATION) + logging.info("set session state: %s", response) + + # create switch node + switch = Node(type=NodeType.SWITCH) + response = core.add_node(session_id, switch) + logging.info("created switch: %s", response) + switch_id = response.node_id + + # helper to create interfaces + interface_helper = client.InterfaceHelper(ip4_prefix="10.83.0.0/16") + + # create node one + position = Position(x=100, y=50) + node = Node(position=position) + response = core.add_node(session_id, node) + logging.info("created node one: %s", response) + node1_id = response.node_id + + # create link + interface1 = interface_helper.create_iface(node1_id, 0) + response = core.add_link(session_id, node1_id, switch_id, interface1) + logging.info("created link from node one to switch: %s", response) + + # create node two + position = Position(x=200, y=50) + node = Node(position=position, server=server_name) + 
response = core.add_node(session_id, node) + logging.info("created node two: %s", response) + node2_id = response.node_id + + # create link + interface1 = interface_helper.create_iface(node2_id, 0) + response = core.add_link(session_id, node2_id, switch_id, interface1) + logging.info("created link from node two to switch: %s", response) + + # change session state + response = core.set_session_state(session_id, SessionState.INSTANTIATION) + logging.info("set session state: %s", response) + + +if __name__ == "__main__": + logging.basicConfig(level=logging.DEBUG) + parser = argparse.ArgumentParser(description="Run distributed_switch example") + parser.add_argument( + "-a", + "--address", + required=True, + help="local address that distributed servers will use for gre tunneling", + ) + parser.add_argument( + "-s", + "--server", + required=True, + help="distributed server to use for creating nodes", + ) + args = parser.parse_args() + main(args) diff --git a/daemon/examples/grpc/emane80211.py b/daemon/examples/grpc/emane80211.py new file mode 100644 index 00000000..ea3f5de0 --- /dev/null +++ b/daemon/examples/grpc/emane80211.py @@ -0,0 +1,51 @@ +# required imports +from core.api.grpc import client +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState +from core.emane.ieee80211abg import EmaneIeee80211abgModel + +# interface helper +iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") + +# create grpc client and connect +core = client.CoreGrpcClient() +core.connect() + +# create session and get id +response = core.create_session() +session_id = response.session_id + +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create emane node +position = Position(x=200, y=200) +emane = Node(type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name) +response = core.add_node(session_id, emane) +emane_id = response.node_id + +# create node one +position = Position(x=100, y=100) +n1 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two +position = Position(x=300, y=100) +n2 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n2) +n2_id = response.node_id + +# configure general emane settings +core.set_emane_config(session_id, {"eventservicettl": "2"}) + +# configure emane model settings +# using a dict mapping currently support values as strings +core.set_emane_model_config( + session_id, emane_id, EmaneIeee80211abgModel.name, {"unicastrate": "3"} +) + +# links nodes to emane +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, emane_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, emane_id, iface1) diff --git a/daemon/examples/grpc/peertopeer.py b/daemon/examples/grpc/peertopeer.py new file mode 100644 index 00000000..a5695b4b --- /dev/null +++ b/daemon/examples/grpc/peertopeer.py @@ -0,0 +1,36 @@ +from core.api.grpc import client +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState + +# interface helper +iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") + +# create grpc client and connect +core = client.CoreGrpcClient() +core.connect() + +# create session and get id +response = core.create_session() +session_id = response.session_id + +# change session state to 
configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create node one +position = Position(x=100, y=100) +n1 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two +position = Position(x=300, y=100) +n2 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n2) +n2_id = response.node_id + +# links nodes together +iface1 = iface_helper.create_iface(n1_id, 0) +iface2 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n1_id, n2_id, iface1, iface2) + +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) diff --git a/daemon/examples/grpc/switch.py b/daemon/examples/grpc/switch.py new file mode 100644 index 00000000..f79f8544 --- /dev/null +++ b/daemon/examples/grpc/switch.py @@ -0,0 +1,44 @@ +# required imports +from core.api.grpc import client +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState + +# interface helper +iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") + +# create grpc client and connect +core = client.CoreGrpcClient() +core.connect() + +# create session and get id +response = core.create_session() +session_id = response.session_id + +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create switch node +position = Position(x=200, y=200) +switch = Node(type=NodeType.SWITCH, position=position) +response = core.add_node(session_id, switch) +switch_id = response.node_id + +# create node one +position = Position(x=100, y=100) +n1 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two +position = Position(x=300, y=100) +n2 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n2) +n2_id = response.node_id + +# links nodes to switch +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, switch_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, switch_id, iface1) + +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) diff --git a/daemon/examples/grpc/wlan.py b/daemon/examples/grpc/wlan.py new file mode 100644 index 00000000..fa8ef9f6 --- /dev/null +++ b/daemon/examples/grpc/wlan.py @@ -0,0 +1,58 @@ +# required imports +from core.api.grpc import client +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState + +# interface helper +iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") + +# create grpc client and connect +core = client.CoreGrpcClient() +core.connect() + +# create session and get id +response = core.create_session() +session_id = response.session_id + +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create wlan node +position = Position(x=200, y=200) +wlan = Node(type=NodeType.WIRELESS_LAN, position=position) +response = core.add_node(session_id, wlan) +wlan_id = response.node_id + +# create node one +position = Position(x=100, y=100) +n1 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two 
+position = Position(x=300, y=100) +n2 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n2) +n2_id = response.node_id + +# configure wlan using a dict mapping currently +# support values as strings +core.set_wlan_config( + session_id, + wlan_id, + { + "range": "280", + "bandwidth": "55000000", + "delay": "6000", + "jitter": "5", + "error": "5", + }, +) + +# links nodes to wlan +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, wlan_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, wlan_id, iface1) + +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) diff --git a/docs/lxc.md b/daemon/examples/lxd/README.md similarity index 64% rename from docs/lxc.md rename to daemon/examples/lxd/README.md index 1ee11453..4e3952ee 100644 --- a/docs/lxc.md +++ b/daemon/examples/lxd/README.md @@ -1,14 +1,11 @@ -# LXC Support +# LXD Support -## Overview - -LXC nodes are provided by way of LXD to create nodes using predefined -images and provide file system separation. +Information on how LXD can be leveraged and included to create +nodes based on LXC containers and images to interface with +existing CORE nodes, when needed. ## Installation -### Debian Systems - ```shell sudo snap install lxd ``` @@ -41,3 +38,8 @@ newgrp lxd * LXD 3.14 * nsenter from util-linux 2.31.1 + +## Examples + +This directory provides a few small examples creating LXC nodes +using LXD and linking them to themselves or with standard CORE nodes. diff --git a/package/examples/lxd/lxd2core.py b/daemon/examples/lxd/lxd2core.py similarity index 88% rename from package/examples/lxd/lxd2core.py rename to daemon/examples/lxd/lxd2core.py index ec671b29..b41520d8 100644 --- a/package/examples/lxd/lxd2core.py +++ b/daemon/examples/lxd/lxd2core.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.lxd import LxcNode @@ -14,10 +14,9 @@ if __name__ == "__main__": try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(image="ubuntu") # create node one - options = LxcNode.create_options() - options.image = "ubuntu" node1 = session.add_node(LxcNode, options=options) interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/lxd/lxd2lxd.py b/daemon/examples/lxd/lxd2lxd.py similarity index 88% rename from package/examples/lxd/lxd2lxd.py rename to daemon/examples/lxd/lxd2lxd.py index 7e9e6a55..3a55e2e1 100644 --- a/package/examples/lxd/lxd2lxd.py +++ b/daemon/examples/lxd/lxd2lxd.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.lxd import LxcNode @@ -15,10 +15,9 @@ if __name__ == "__main__": # create nodes and interfaces try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(image="ubuntu:18.04") # create node one - options = LxcNode.create_options() - options.image = "ubuntu:18.04" node1 = session.add_node(LxcNode, options=options) interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/lxd/switch.py b/daemon/examples/lxd/switch.py similarity index 91% rename from package/examples/lxd/switch.py rename to 
daemon/examples/lxd/switch.py index c093fd77..12767e71 100644 --- a/package/examples/lxd/switch.py +++ b/daemon/examples/lxd/switch.py @@ -1,7 +1,7 @@ import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.lxd import LxcNode @@ -16,13 +16,12 @@ if __name__ == "__main__": try: prefixes = IpPrefixes(ip4_prefix="10.83.0.0/16") + options = NodeOptions(image="ubuntu") # create switch switch = session.add_node(SwitchNode) # node one - options = LxcNode.create_options() - options.image = "ubuntu" node1 = session.add_node(LxcNode, options=options) interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/grpc/__init__.py b/daemon/examples/myemane/__init__.py similarity index 100% rename from package/examples/grpc/__init__.py rename to daemon/examples/myemane/__init__.py diff --git a/package/examples/myemane/examplemodel.py b/daemon/examples/myemane/examplemodel.py similarity index 69% rename from package/examples/myemane/examplemodel.py rename to daemon/examples/myemane/examplemodel.py index bd5102e4..b9e6e148 100644 --- a/package/examples/myemane/examplemodel.py +++ b/daemon/examples/myemane/examplemodel.py @@ -1,7 +1,6 @@ """ Example custom emane model. """ -from pathlib import Path from typing import Dict, List, Optional, Set from core.config import Configuration @@ -40,35 +39,17 @@ class ExampleModel(emanemodel.EmaneModel): name: str = "emane_example" mac_library: str = "rfpipemaclayer" - mac_xml: str = "rfpipemaclayer.xml" + mac_xml: str = "/usr/share/emane/manifest/rfpipemaclayer.xml" mac_defaults: Dict[str, str] = { "pcrcurveuri": "/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml" } - mac_config: List[Configuration] = [] + mac_config: List[Configuration] = emanemanifest.parse(mac_xml, mac_defaults) phy_library: Optional[str] = None - phy_xml: str = "emanephy.xml" + phy_xml: str = "/usr/share/emane/manifest/emanephy.xml" phy_defaults: Dict[str, str] = { "subid": "1", "propagationmodel": "2ray", "noisemode": "none", } - phy_config: List[Configuration] = [] + phy_config: List[Configuration] = emanemanifest.parse(phy_xml, phy_defaults) config_ignore: Set[str] = set() - - @classmethod - def load(cls, emane_prefix: Path) -> None: - """ - Called after being loaded within the EmaneManager. Provides configured - emane_prefix for parsing xml files. 
- - :param emane_prefix: configured emane prefix path - :return: nothing - """ - cls._load_platform_config(emane_prefix) - manifest_path = "share/emane/manifest" - # load mac configuration - mac_xml_path = emane_prefix / manifest_path / cls.mac_xml - cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults) - # load phy configuration - phy_xml_path = emane_prefix / manifest_path / cls.phy_xml - cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults) diff --git a/package/examples/myservices/__init__.py b/daemon/examples/myservices/__init__.py similarity index 100% rename from package/examples/myservices/__init__.py rename to daemon/examples/myservices/__init__.py diff --git a/package/examples/myservices/exampleservice.py b/daemon/examples/myservices/exampleservice.py similarity index 100% rename from package/examples/myservices/exampleservice.py rename to daemon/examples/myservices/exampleservice.py diff --git a/package/examples/python/distributed_emane.py b/daemon/examples/python/distributed_emane.py similarity index 85% rename from package/examples/python/distributed_emane.py rename to daemon/examples/python/distributed_emane.py index d19a2d87..4421283f 100644 --- a/package/examples/python/distributed_emane.py +++ b/daemon/examples/python/distributed_emane.py @@ -6,10 +6,10 @@ with the GUI. import argparse import logging -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel +from core.emane.ieee80211abg import EmaneIeee80211abgModel from core.emane.nodes import EmaneNet from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode @@ -50,13 +50,11 @@ def main(args): session.set_state(EventTypes.CONFIGURATION_STATE) # create local node, switch, and remote nodes - options = CoreNode.create_options() - options.model = "mdr" + options = NodeOptions(model="mdr") + options.set_position(0, 0) node1 = session.add_node(CoreNode, options=options) - options = EmaneNet.create_options() - options.emane_model = EmaneIeee80211abgModel.name - emane_net = session.add_node(EmaneNet, options=options) - options = CoreNode.create_options() + emane_net = session.add_node(EmaneNet) + session.emane.set_model(emane_net, EmaneIeee80211abgModel) options.server = server_name node2 = session.add_node(CoreNode, options=options) diff --git a/package/examples/python/distributed_lxd.py b/daemon/examples/python/distributed_lxd.py similarity index 94% rename from package/examples/python/distributed_lxd.py rename to daemon/examples/python/distributed_lxd.py index 70af8a29..26f7caa6 100644 --- a/package/examples/python/distributed_lxd.py +++ b/daemon/examples/python/distributed_lxd.py @@ -7,7 +7,7 @@ import argparse import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.lxd import LxcNode @@ -42,8 +42,7 @@ def main(args): session.set_state(EventTypes.CONFIGURATION_STATE) # create local node, switch, and remote nodes - options = LxcNode.create_options() - options.image = "ubuntu:18.04" + options = NodeOptions(image="ubuntu:18.04") node1 = session.add_node(LxcNode, options=options) options.server = server_name node2 = session.add_node(LxcNode, options=options) diff --git a/package/examples/python/distributed_ptp.py b/daemon/examples/python/distributed_ptp.py 
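The distributed examples above all follow the same pattern: build a NodeOptions, add the local node, then set options.server before adding the node that should run on the remote host. A condensed sketch of just that pattern is below; the server name "core2" is an assumption and would need to be registered with the session beforehand.

```python
# Condensed sketch of the distributed NodeOptions pattern from the examples above.
from core.emulator.coreemu import CoreEmu
from core.emulator.data import NodeOptions
from core.emulator.enumerations import EventTypes
from core.nodes.base import CoreNode

coreemu = CoreEmu()
session = coreemu.create_session()
session.set_state(EventTypes.CONFIGURATION_STATE)

options = NodeOptions(model="mdr")
node1 = session.add_node(CoreNode, options=options)  # created locally
options.server = "core2"  # assumed name of a distributed server added beforehand
node2 = session.add_node(CoreNode, options=options)  # created on the remote server
```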
similarity index 88% rename from package/examples/python/distributed_ptp.py rename to daemon/examples/python/distributed_ptp.py index 30dbb6bb..fe714e1d 100644 --- a/package/examples/python/distributed_ptp.py +++ b/daemon/examples/python/distributed_ptp.py @@ -7,7 +7,7 @@ import argparse import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode @@ -42,8 +42,10 @@ def main(args): session.set_state(EventTypes.CONFIGURATION_STATE) # create local node, switch, and remote nodes - node1 = session.add_node(CoreNode) - node2 = session.add_node(CoreNode, server=server_name) + options = NodeOptions() + node1 = session.add_node(CoreNode, options=options) + options.server = server_name + node2 = session.add_node(CoreNode, options=options) # create node interfaces and link interface1_data = prefixes.create_iface(node1) diff --git a/package/examples/python/distributed_switch.py b/daemon/examples/python/distributed_switch.py similarity index 96% rename from package/examples/python/distributed_switch.py rename to daemon/examples/python/distributed_switch.py index 59a0447f..35de1cad 100644 --- a/package/examples/python/distributed_switch.py +++ b/daemon/examples/python/distributed_switch.py @@ -7,7 +7,7 @@ import argparse import logging from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.nodes.base import CoreNode from core.nodes.network import SwitchNode @@ -47,7 +47,7 @@ def main(args): # create local node, switch, and remote nodes node1 = session.add_node(CoreNode) switch = session.add_node(SwitchNode) - options = CoreNode.create_options() + options = NodeOptions() options.server = server_name node2 = session.add_node(CoreNode, options=options) diff --git a/package/examples/python/emane80211.py b/daemon/examples/python/emane80211.py similarity index 52% rename from package/examples/python/emane80211.py rename to daemon/examples/python/emane80211.py index f369a718..ae4f194b 100644 --- a/package/examples/python/emane80211.py +++ b/daemon/examples/python/emane80211.py @@ -1,10 +1,10 @@ # required imports -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel +from core.emane.ieee80211abg import EmaneIeee80211abgModel from core.emane.nodes import EmaneNet from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes -from core.nodes.base import CoreNode, Position +from core.nodes.base import CoreNode # ip nerator for example ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24") @@ -21,27 +21,23 @@ session.location.refscale = 150.0 session.set_state(EventTypes.CONFIGURATION_STATE) # create emane -options = EmaneNet.create_options() -options.emane_model = EmaneIeee80211abgModel.name -position = Position(x=200, y=200) -emane = session.add_node(EmaneNet, position=position, options=options) +options = NodeOptions(x=200, y=200, emane=EmaneIeee80211abgModel.name) +emane = session.add_node(EmaneNet, options=options) # create nodes -options = CoreNode.create_options() -options.model = "mdr" -position = Position(x=100, y=100) -n1 = session.add_node(CoreNode, position=position, options=options) -options = CoreNode.create_options() -options.model = "mdr" 
-position = Position(x=300, y=100) -n2 = session.add_node(CoreNode, position=position, options=options) +options = NodeOptions(model="mdr", x=100, y=100) +n1 = session.add_node(CoreNode, options=options) +options = NodeOptions(model="mdr", x=300, y=100) +n2 = session.add_node(CoreNode, options=options) -# configure emane settings -# configuration values are currently supported as strings -session.emane.set_config( - emane.id, - EmaneIeee80211abgModel.name, - {"unicastrate": "3", "eventservicettl": "2"}, +# configure general emane settings +config = session.emane.get_configs() +config.update({"eventservicettl": "2"}) + +# configure emane model settings +# using a dict mapping currently support values as strings +session.emane.set_model_config( + emane.id, EmaneIeee80211abgModel.name, {"unicastrate": "3"} ) # link nodes to emane diff --git a/package/examples/python/peertopeer.py b/daemon/examples/python/peertopeer.py similarity index 73% rename from package/examples/python/peertopeer.py rename to daemon/examples/python/peertopeer.py index 7883cac2..56fbe258 100644 --- a/package/examples/python/peertopeer.py +++ b/daemon/examples/python/peertopeer.py @@ -1,8 +1,8 @@ # required imports from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes -from core.nodes.base import CoreNode, Position +from core.nodes.base import CoreNode # ip nerator for example ip_prefixes = IpPrefixes(ip4_prefix="10.0.0.0/24") @@ -15,10 +15,10 @@ session = coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) # create nodes -position = Position(x=100, y=100) -n1 = session.add_node(CoreNode, position=position) -position = Position(x=300, y=100) -n2 = session.add_node(CoreNode, position=position) +options = NodeOptions(x=100, y=100) +n1 = session.add_node(CoreNode, options=options) +options = NodeOptions(x=300, y=100) +n2 = session.add_node(CoreNode, options=options) # link nodes together iface1 = ip_prefixes.create_iface(n1) diff --git a/package/examples/python/switch.py b/daemon/examples/python/switch.py similarity index 70% rename from package/examples/python/switch.py rename to daemon/examples/python/switch.py index a609aa03..b7894bc3 100644 --- a/package/examples/python/switch.py +++ b/daemon/examples/python/switch.py @@ -1,8 +1,8 @@ # required imports from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes -from core.nodes.base import CoreNode, Position +from core.nodes.base import CoreNode from core.nodes.network import SwitchNode # ip nerator for example @@ -16,14 +16,14 @@ session = coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) # create switch -position = Position(x=200, y=200) -switch = session.add_node(SwitchNode, position=position) +options = NodeOptions(x=200, y=200) +switch = session.add_node(SwitchNode, options=options) # create nodes -position = Position(x=100, y=100) -n1 = session.add_node(CoreNode, position=position) -position = Position(x=300, y=100) -n2 = session.add_node(CoreNode, position=position) +options = NodeOptions(x=100, y=100) +n1 = session.add_node(CoreNode, options=options) +options = NodeOptions(x=300, y=100) +n2 = session.add_node(CoreNode, options=options) # link nodes to switch iface1 = ip_prefixes.create_iface(n1) diff --git a/package/examples/python/wlan.py 
b/daemon/examples/python/wlan.py similarity index 69% rename from package/examples/python/wlan.py rename to daemon/examples/python/wlan.py index 512aea3e..f0dbc97a 100644 --- a/package/examples/python/wlan.py +++ b/daemon/examples/python/wlan.py @@ -1,9 +1,9 @@ # required imports from core.emulator.coreemu import CoreEmu -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.enumerations import EventTypes from core.location.mobility import BasicRangeModel -from core.nodes.base import CoreNode, Position +from core.nodes.base import CoreNode from core.nodes.network import WlanNode # ip nerator for example @@ -17,18 +17,14 @@ session = coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) # create wlan -position = Position(x=200, y=200) -wlan = session.add_node(WlanNode, position=position) +options = NodeOptions(x=200, y=200) +wlan = session.add_node(WlanNode, options=options) # create nodes -options = CoreNode.create_options() -options.model = "mdr" -position = Position(x=100, y=100) -n1 = session.add_node(CoreNode, position=position, options=options) -options = CoreNode.create_options() -options.model = "mdr" -position = Position(x=300, y=100) -n2 = session.add_node(CoreNode, position=position, options=options) +options = NodeOptions(model="mdr", x=100, y=100) +n1 = session.add_node(CoreNode, options=options) +options = NodeOptions(model="mdr", x=300, y=100) +n2 = session.add_node(CoreNode, options=options) # configuring wlan session.mobility.set_model_config( diff --git a/package/examples/services/sampleFirewall b/daemon/examples/services/sampleFirewall similarity index 100% rename from package/examples/services/sampleFirewall rename to daemon/examples/services/sampleFirewall diff --git a/package/examples/services/sampleIPsec b/daemon/examples/services/sampleIPsec similarity index 100% rename from package/examples/services/sampleIPsec rename to daemon/examples/services/sampleIPsec diff --git a/package/examples/services/sampleVPNClient b/daemon/examples/services/sampleVPNClient similarity index 100% rename from package/examples/services/sampleVPNClient rename to daemon/examples/services/sampleVPNClient diff --git a/package/examples/services/sampleVPNServer b/daemon/examples/services/sampleVPNServer similarity index 100% rename from package/examples/services/sampleVPNServer rename to daemon/examples/services/sampleVPNServer diff --git a/package/examples/tdma/schedule.xml b/daemon/examples/tdma/schedule.xml similarity index 100% rename from package/examples/tdma/schedule.xml rename to daemon/examples/tdma/schedule.xml diff --git a/daemon/poetry.lock b/daemon/poetry.lock index c2aae40d..0e889ab7 100644 --- a/daemon/poetry.lock +++ b/daemon/poetry.lock @@ -1,417 +1,413 @@ [[package]] -name = "atomicwrites" -version = "1.4.1" -description = "Atomic file writes." category = "dev" +description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." +name = "appdirs" +optional = false +python-versions = "*" +version = "1.4.4" + +[[package]] +category = "dev" +description = "Atomic file writes." 
+marker = "sys_platform == \"win32\"" +name = "atomicwrites" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "1.4.0" [[package]] -name = "attrs" -version = "22.2.0" -description = "Classes Without Boilerplate" category = "dev" +description = "Classes Without Boilerplate" +name = "attrs" optional = false -python-versions = ">=3.6" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "20.1.0" [package.extras] -cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"] -dev = ["attrs[docs,tests]"] -docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"] -tests = ["attrs[tests-no-zope]", "zope.interface"] -tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=0.971,<0.990)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] -tests_no_zope = ["cloudpickle", "hypothesis", "mypy (>=0.971,<0.990)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +dev = ["coverage (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "sphinx", "sphinx-rtd-theme", "pre-commit"] +docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"] +tests = ["coverage (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"] [[package]] -name = "bcrypt" -version = "4.0.1" -description = "Modern password hashing for your software and your servers" category = "main" +description = "Modern password hashing for your software and your servers" +name = "bcrypt" optional = false python-versions = ">=3.6" +version = "3.2.0" + +[package.dependencies] +cffi = ">=1.1" +six = ">=1.4.1" [package.extras] -tests = ["pytest (>=3.2.1,!=3.3.0)"] +tests = ["pytest (>=3.2.1,<3.3.0 || >3.3.0)"] typecheck = ["mypy"] [[package]] -name = "black" -version = "22.12.0" -description = "The uncompromising code formatter." category = "dev" -optional = false -python-versions = ">=3.7" - -[package.dependencies] -click = ">=8.0.0" -mypy-extensions = ">=0.4.3" -pathspec = ">=0.9.0" -platformdirs = ">=2" -tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""} -typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""} - -[package.extras] -colorama = ["colorama (>=0.4.3)"] -d = ["aiohttp (>=3.7.4)"] -jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"] -uvloop = ["uvloop (>=0.15.2)"] - -[[package]] -name = "certifi" -version = "2022.12.7" -description = "Python package for providing Mozilla's CA Bundle." -category = "main" +description = "The uncompromising code formatter." +name = "black" optional = false python-versions = ">=3.6" +version = "19.3b0" + +[package.dependencies] +appdirs = "*" +attrs = ">=18.1.0" +click = ">=6.5" +toml = ">=0.9.4" + +[package.extras] +d = ["aiohttp (>=3.3.2)", "aiohttp-cors"] [[package]] -name = "cffi" -version = "1.15.1" -description = "Foreign Function Interface for Python calling C code." category = "main" +description = "Foreign Function Interface for Python calling C code." +name = "cffi" optional = false python-versions = "*" +version = "1.14.2" [package.dependencies] pycparser = "*" [[package]] -name = "cfgv" -version = "3.3.1" +category = "dev" description = "Validate configuration and produce human readable error messages." 
-category = "dev" -optional = false -python-versions = ">=3.6.1" - -[[package]] -name = "click" -version = "8.1.3" -description = "Composable command line interface toolkit" -category = "dev" -optional = false -python-versions = ">=3.7" - -[package.dependencies] -colorama = {version = "*", markers = "platform_system == \"Windows\""} - -[[package]] -name = "colorama" -version = "0.4.6" -description = "Cross-platform colored terminal text." -category = "dev" -optional = false -python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" - -[[package]] -name = "cryptography" -version = "39.0.1" -description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." -category = "main" +name = "cfgv" optional = false python-versions = ">=3.6" +version = "3.0.0" + +[[package]] +category = "dev" +description = "Composable command line interface toolkit" +name = "click" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +version = "7.1.2" + +[[package]] +category = "dev" +description = "Cross-platform colored terminal text." +marker = "sys_platform == \"win32\"" +name = "colorama" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +version = "0.4.3" + +[[package]] +category = "main" +description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." +name = "cryptography" +optional = false +python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*" +version = "3.0" [package.dependencies] -cffi = ">=1.12" +cffi = ">=1.8,<1.11.3 || >1.11.3" +six = ">=1.4.1" [package.extras] -docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=1.1.1)"] -docstest = ["pyenchant (>=1.6.11)", "sphinxcontrib-spelling (>=4.0.1)", "twine (>=1.12.0)"] -pep8test = ["black", "check-manifest", "mypy", "ruff", "types-pytz", "types-requests"] -sdist = ["setuptools-rust (>=0.11.4)"] +docs = ["sphinx (>=1.6.5,<1.8.0 || >1.8.0,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1)", "sphinx-rtd-theme"] +docstest = ["doc8", "pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"] +idna = ["idna (>=2.1)"] +pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"] ssh = ["bcrypt (>=3.1.5)"] -test = ["hypothesis (>=1.11.4,!=3.79.2)", "iso8601", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-shard (>=0.1.2)", "pytest-subtests", "pytest-xdist", "pytz"] -test-randomorder = ["pytest-randomly"] -tox = ["tox"] +test = ["pytest (>=3.6.0,<3.9.0 || >3.9.0,<3.9.1 || >3.9.1,<3.9.2 || >3.9.2)", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,<3.79.2 || >3.79.2)"] [[package]] -name = "distlib" -version = "0.3.6" -description = "Distribution utilities" -category = "dev" -optional = false -python-versions = "*" - -[[package]] -name = "fabric" -version = "2.7.1" -description = "High level SSH command execution" category = "main" +description = "A backport of the dataclasses module for Python 3.6" +marker = "python_version >= \"3.6\" and python_version < \"3.7\"" +name = "dataclasses" +optional = false +python-versions = ">=3.6, <3.7" +version = "0.7" + +[[package]] +category = "dev" +description = "Distribution utilities" +name = "distlib" optional = false python-versions = "*" +version = "0.3.1" + +[[package]] +category = "main" +description = "High level SSH command execution" +name = "fabric" +optional = false +python-versions = "*" +version = "2.5.0" [package.dependencies] invoke = ">=1.3,<2.0" paramiko = 
">=2.4" -pathlib2 = "*" [package.extras] pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"] testing = ["mock (>=2.0.0,<3.0)"] [[package]] -name = "filelock" -version = "3.9.0" -description = "A platform independent file lock." category = "dev" +description = "A platform independent file lock." +name = "filelock" optional = false -python-versions = ">=3.7" - -[package.extras] -docs = ["furo (>=2022.12.7)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"] -testing = ["covdefaults (>=2.2.2)", "coverage (>=7.0.1)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-timeout (>=2.1)"] +python-versions = "*" +version = "3.0.12" [[package]] -name = "flake8" -version = "3.8.2" -description = "the modular source code checker: pep8 pyflakes and co" category = "dev" +description = "the modular source code checker: pep8 pyflakes and co" +name = "flake8" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" +version = "3.8.2" [package.dependencies] mccabe = ">=0.6.0,<0.7.0" pycodestyle = ">=2.6.0a1,<2.7.0" pyflakes = ">=2.2.0,<2.3.0" +[package.dependencies.importlib-metadata] +python = "<3.8" +version = "*" + [[package]] -name = "grpcio" -version = "1.54.2" +category = "main" description = "HTTP/2-based RPC framework" -category = "main" -optional = false -python-versions = ">=3.7" - -[package.extras] -protobuf = ["grpcio-tools (>=1.54.2)"] - -[[package]] -name = "grpcio-tools" -version = "1.54.2" -description = "Protobuf code generator for gRPC" -category = "dev" -optional = false -python-versions = ">=3.7" - -[package.dependencies] -grpcio = ">=1.54.2" -protobuf = ">=4.21.6,<5.0dev" -setuptools = "*" - -[[package]] -name = "identify" -version = "2.5.18" -description = "File identification library for Python" -category = "dev" -optional = false -python-versions = ">=3.7" - -[package.extras] -license = ["ukkonen"] - -[[package]] -name = "iniconfig" -version = "2.0.0" -description = "brain-dead simple config-ini parsing" -category = "dev" -optional = false -python-versions = ">=3.7" - -[[package]] -name = "invoke" -version = "1.7.3" -description = "Pythonic task execution" -category = "main" +name = "grpcio" optional = false python-versions = "*" +version = "1.27.2" + +[package.dependencies] +six = ">=1.5.2" [[package]] -name = "isort" -version = "4.3.21" -description = "A Python utility / library to sort Python imports." 
category = "dev" +description = "Protobuf code generator for gRPC" +name = "grpcio-tools" +optional = false +python-versions = "*" +version = "1.27.2" + +[package.dependencies] +grpcio = ">=1.27.2" +protobuf = ">=3.5.0.post1" + +[[package]] +category = "dev" +description = "File identification library for Python" +name = "identify" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" +version = "1.4.28" + +[package.extras] +license = ["editdistance"] + +[[package]] +category = "dev" +description = "Read metadata from Python packages" +marker = "python_version < \"3.8\"" +name = "importlib-metadata" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" +version = "1.7.0" + +[package.dependencies] +zipp = ">=0.5" + +[package.extras] +docs = ["sphinx", "rst.linker"] +testing = ["packaging", "pep517", "importlib-resources (>=1.3)"] + +[[package]] +category = "dev" +description = "Read resources from Python packages" +marker = "python_version < \"3.7\"" +name = "importlib-resources" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" +version = "3.0.0" + +[package.dependencies] +[package.dependencies.zipp] +python = "<3.8" +version = ">=0.4" + +[package.extras] +docs = ["sphinx", "rst.linker", "jaraco.packaging"] + +[[package]] +category = "main" +description = "Pythonic task execution" +name = "invoke" +optional = false +python-versions = "*" +version = "1.4.1" + +[[package]] +category = "dev" +description = "A Python utility / library to sort Python imports." +name = "isort" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "4.3.21" [package.extras] pipfile = ["pipreqs", "requirementslib"] pyproject = ["toml"] -requirements = ["pip-api", "pipreqs"] +requirements = ["pipreqs", "pip-api"] xdg_home = ["appdirs (>=1.4.0)"] [[package]] -name = "lxml" -version = "4.9.1" -description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API." category = "main" +description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API." +name = "lxml" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*" +version = "4.5.1" [package.extras] cssselect = ["cssselect (>=0.7)"] html5 = ["html5lib"] -htmlsoup = ["BeautifulSoup4"] +htmlsoup = ["beautifulsoup4"] source = ["Cython (>=0.29.7)"] [[package]] -name = "Mako" -version = "1.2.3" -description = "A super-fast templating language that borrows the best ideas from the existing templating languages." category = "main" +description = "A super-fast templating language that borrows the best ideas from the existing templating languages." +name = "mako" optional = false -python-versions = ">=3.7" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "1.1.3" [package.dependencies] MarkupSafe = ">=0.9.2" [package.extras] -babel = ["Babel"] +babel = ["babel"] lingua = ["lingua"] -testing = ["pytest"] [[package]] -name = "MarkupSafe" -version = "2.1.2" -description = "Safely add untrusted strings to HTML/XML markup." category = "main" +description = "Safely add untrusted strings to HTML/XML markup." 
+name = "markupsafe" optional = false -python-versions = ">=3.7" +python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*" +version = "1.1.1" [[package]] -name = "mccabe" -version = "0.6.1" -description = "McCabe checker, plugin for flake8" category = "dev" +description = "McCabe checker, plugin for flake8" +name = "mccabe" optional = false python-versions = "*" +version = "0.6.1" [[package]] -name = "mock" -version = "4.0.2" -description = "Rolling backport of unittest.mock for all Pythons" category = "dev" +description = "Rolling backport of unittest.mock for all Pythons" +name = "mock" optional = false python-versions = ">=3.6" +version = "4.0.2" [package.extras] -build = ["blurb", "twine", "wheel"] +build = ["twine", "wheel", "blurb"] docs = ["sphinx"] test = ["pytest", "pytest-cov"] [[package]] -name = "mypy-extensions" -version = "1.0.0" -description = "Type system extensions for programs checked with the mypy type checker." category = "dev" +description = "More routines for operating on iterables, beyond itertools" +name = "more-itertools" optional = false python-versions = ">=3.5" +version = "8.4.0" [[package]] -name = "netaddr" -version = "0.7.19" +category = "main" description = "A network address manipulation library for Python" -category = "main" +name = "netaddr" optional = false python-versions = "*" +version = "0.7.19" [[package]] -name = "nodeenv" -version = "1.7.0" +category = "dev" description = "Node.js virtual environment builder" -category = "dev" -optional = false -python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*" - -[package.dependencies] -setuptools = "*" - -[[package]] -name = "packaging" -version = "23.0" -description = "Core utilities for Python packages" -category = "dev" -optional = false -python-versions = ">=3.7" - -[[package]] -name = "paramiko" -version = "3.0.0" -description = "SSH2 protocol library" -category = "main" -optional = false -python-versions = ">=3.6" - -[package.dependencies] -bcrypt = ">=3.2" -cryptography = ">=3.3" -pynacl = ">=1.5" - -[package.extras] -all = ["gssapi (>=1.4.1)", "invoke (>=2.0)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] -gssapi = ["gssapi (>=1.4.1)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] -invoke = ["invoke (>=2.0)"] - -[[package]] -name = "pathlib2" -version = "2.3.7.post1" -description = "Object-oriented filesystem paths" -category = "main" +name = "nodeenv" optional = false python-versions = "*" +version = "1.4.0" + +[[package]] +category = "dev" +description = "Core utilities for Python packages" +name = "packaging" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "20.4" [package.dependencies] +pyparsing = ">=2.0.2" six = "*" [[package]] -name = "pathspec" -version = "0.11.0" -description = "Utility library for gitignore style pattern matching of file paths." 
-category = "dev" -optional = false -python-versions = ">=3.7" - -[[package]] -name = "Pillow" -version = "9.4.0" -description = "Python Imaging Library (Fork)" category = "main" +description = "SSH2 protocol library" +name = "paramiko" optional = false -python-versions = ">=3.7" +python-versions = "*" +version = "2.7.1" + +[package.dependencies] +bcrypt = ">=3.1.3" +cryptography = ">=2.5" +pynacl = ">=1.0.1" [package.extras] -docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"] -tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"] +all = ["pyasn1 (>=0.1.7)", "pynacl (>=1.0.1)", "bcrypt (>=3.1.3)", "invoke (>=1.3)", "gssapi (>=1.4.1)", "pywin32 (>=2.1.8)"] +ed25519 = ["pynacl (>=1.0.1)", "bcrypt (>=3.1.3)"] +gssapi = ["pyasn1 (>=0.1.7)", "gssapi (>=1.4.1)", "pywin32 (>=2.1.8)"] +invoke = ["invoke (>=1.3)"] + +[[package]] +category = "main" +description = "Python Imaging Library (Fork)" +name = "pillow" +optional = false +python-versions = ">=3.5" +version = "7.1.2" [[package]] -name = "platformdirs" -version = "3.0.0" -description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." category = "dev" -optional = false -python-versions = ">=3.7" - -[package.extras] -docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"] -test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"] - -[[package]] -name = "pluggy" -version = "1.0.0" description = "plugin and hook calling mechanisms for python" -category = "dev" +name = "pluggy" optional = false -python-versions = ">=3.6" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "0.13.1" + +[package.dependencies] +[package.dependencies.importlib-metadata] +python = "<3.8" +version = ">=0.12" [package.extras] dev = ["pre-commit", "tox"] -testing = ["pytest", "pytest-benchmark"] [[package]] -name = "pre-commit" -version = "2.1.1" -description = "A framework for managing and maintaining multi-language pre-commit hooks." category = "dev" +description = "A framework for managing and maintaining multi-language pre-commit hooks." 
+name = "pre-commit" optional = false python-versions = ">=3.6" +version = "2.1.1" [package.dependencies] cfgv = ">=2.0.0" @@ -421,573 +417,478 @@ pyyaml = ">=5.1" toml = "*" virtualenv = ">=15.2" +[package.dependencies.importlib-metadata] +python = "<3.8" +version = "*" + +[package.dependencies.importlib-resources] +python = "<3.7" +version = "*" + [[package]] +category = "main" +description = "Protocol Buffers" name = "protobuf" -version = "4.21.9" -description = "" -category = "main" optional = false -python-versions = ">=3.7" +python-versions = "*" +version = "3.12.2" + +[package.dependencies] +setuptools = "*" +six = ">=1.9" [[package]] -name = "py" -version = "1.11.0" +category = "dev" description = "library with cross-python path, ini-parsing, io, code, log facilities" -category = "dev" +name = "py" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "1.9.0" [[package]] -name = "pycodestyle" -version = "2.6.0" +category = "dev" description = "Python style guide checker" -category = "dev" +name = "pycodestyle" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "2.6.0" [[package]] -name = "pycparser" -version = "2.21" +category = "main" description = "C parser in Python" -category = "main" +name = "pycparser" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "2.20" [[package]] -name = "pyflakes" -version = "2.2.0" -description = "passive checker of Python programs" category = "dev" +description = "passive checker of Python programs" +name = "pyflakes" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "2.2.0" [[package]] -name = "PyNaCl" -version = "1.5.0" -description = "Python binding to the Networking and Cryptography (NaCl) library" category = "main" +description = "Python binding to the Networking and Cryptography (NaCl) library" +name = "pynacl" optional = false -python-versions = ">=3.6" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +version = "1.4.0" [package.dependencies] cffi = ">=1.4.1" +six = "*" [package.extras] -docs = ["sphinx (>=1.6.5)", "sphinx_rtd_theme"] -tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"] +docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"] +tests = ["pytest (>=3.2.1,<3.3.0 || >3.3.0)", "hypothesis (>=3.27.0)"] [[package]] -name = "pyproj" -version = "3.3.1" -description = "Python interface to PROJ (cartographic projections and coordinate transformations library)" -category = "main" -optional = false -python-versions = ">=3.8" - -[package.dependencies] -certifi = "*" - -[[package]] -name = "pytest" -version = "6.2.5" -description = "pytest: simple powerful testing with Python" category = "dev" +description = "Python parsing module" +name = "pyparsing" optional = false -python-versions = ">=3.6" +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +version = "2.4.7" + +[[package]] +category = "main" +description = "Python interface to PROJ (cartographic projections and coordinate transformations library)" +name = "pyproj" +optional = false +python-versions = ">=3.5" +version = "2.6.1.post1" + +[[package]] +category = "dev" +description = "pytest: simple powerful testing with Python" +name = "pytest" +optional = false +python-versions = ">=3.5" +version = "5.4.3" [package.dependencies] -atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""} -attrs = ">=19.2.0" -colorama = {version = 
"*", markers = "sys_platform == \"win32\""} -iniconfig = "*" +atomicwrites = ">=1.0" +attrs = ">=17.4.0" +colorama = "*" +more-itertools = ">=4.0.0" packaging = "*" -pluggy = ">=0.12,<2.0" -py = ">=1.8.2" -toml = "*" +pluggy = ">=0.12,<1.0" +py = ">=1.5.0" +wcwidth = "*" + +[package.dependencies.importlib-metadata] +python = "<3.8" +version = ">=0.12" [package.extras] +checkqa-mypy = ["mypy (v0.761)"] testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"] [[package]] -name = "PyYAML" -version = "6.0.1" +category = "main" description = "YAML parser and emitter for Python" +name = "pyyaml" +optional = false +python-versions = "*" +version = "5.3.1" + +[[package]] category = "main" -optional = false -python-versions = ">=3.6" - -[[package]] -name = "setuptools" -version = "67.4.0" -description = "Easily download, build, install, upgrade, and uninstall Python packages" -category = "dev" -optional = false -python-versions = ">=3.7" - -[package.extras] -docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"] -testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] -testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"] - -[[package]] -name = "six" -version = "1.16.0" description = "Python 2 and 3 compatibility utilities" -category = "main" +name = "six" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" +version = "1.15.0" [[package]] -name = "toml" -version = "0.10.2" +category = "dev" description = "Python Library for Tom's Obvious, Minimal Language" -category = "dev" +name = "toml" optional = false -python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +python-versions = "*" +version = "0.10.1" [[package]] -name = "tomli" -version = "2.0.1" -description = "A lil' TOML parser" category = "dev" -optional = false -python-versions = ">=3.7" - -[[package]] -name = "typing-extensions" -version = "4.5.0" -description = "Backported and Experimental Type Hints for Python 3.7+" -category = "dev" -optional = false -python-versions = ">=3.7" - -[[package]] -name = "virtualenv" -version = "20.19.0" description = "Virtual Python Environment builder" -category = "dev" +name = "virtualenv" optional = false -python-versions = ">=3.7" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" +version = "20.0.31" [package.dependencies] -distlib = ">=0.3.6,<1" -filelock = ">=3.4.1,<4" -platformdirs = ">=2.4,<4" +appdirs = ">=1.4.3,<2" +distlib = ">=0.3.1,<1" +filelock = ">=3.0.0,<4" +six = ">=1.9.0,<2" + +[package.dependencies.importlib-metadata] +python = "<3.8" +version = ">=0.12,<2" + +[package.dependencies.importlib-resources] +python = "<3.7" +version = ">=1.0" [package.extras] -docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-argparse (>=0.4)", 
"sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=22.12)"] -test = ["covdefaults (>=2.2.2)", "coverage (>=7.1)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23)", "pytest (>=7.2.1)", "pytest-env (>=0.8.1)", "pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.10)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)"] +docs = ["proselint (>=0.10.2)", "sphinx (>=3)", "sphinx-argparse (>=0.2.5)", "sphinx-rtd-theme (>=0.4.3)", "towncrier (>=19.9.0rc1)"] +testing = ["coverage (>=5)", "coverage-enable-subprocess (>=1)", "flaky (>=3)", "pytest (>=4)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.1)", "pytest-mock (>=2)", "pytest-randomly (>=1)", "pytest-timeout (>=1)", "pytest-xdist (>=1.31.0)", "packaging (>=20.0)", "xonsh (>=0.9.16)"] + +[[package]] +category = "dev" +description = "Measures the displayed width of unicode strings in a terminal" +name = "wcwidth" +optional = false +python-versions = "*" +version = "0.2.5" + +[[package]] +category = "dev" +description = "Backport of pathlib-compatible object wrapper for zip files" +marker = "python_version < \"3.8\"" +name = "zipp" +optional = false +python-versions = ">=3.6" +version = "3.1.0" + +[package.extras] +docs = ["sphinx", "jaraco.packaging (>=3.2)", "rst.linker (>=1.9)"] +testing = ["jaraco.itertools", "func-timeout"] [metadata] -lock-version = "1.1" -python-versions = "^3.9" -content-hash = "10902a50368c4381aec5a3e72a221a4c4225ae1be17ee38600f89aaee4a49c1f" +content-hash = "cd09344b4f0183ada890fa9ac205e6d6410d94863e9067b5d2957274cebf374b" +python-versions = "^3.6" [metadata.files] +appdirs = [ + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, +] atomicwrites = [ - {file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"}, + {file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, + {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, ] attrs = [ - {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"}, - {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"}, + {file = "attrs-20.1.0-py2.py3-none-any.whl", hash = "sha256:2867b7b9f8326499ab5b0e2d12801fa5c98842d2cbd22b35112ae04bf85b4dff"}, + {file = "attrs-20.1.0.tar.gz", hash = "sha256:0ef97238856430dcf9228e07f316aefc17e8939fc8507e18c6501b761ef1a42a"}, ] bcrypt = [ - {file = "bcrypt-4.0.1-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:b1023030aec778185a6c16cf70f359cbb6e0c289fd564a7cfa29e727a1c38f8f"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:08d2947c490093a11416df18043c27abe3921558d2c03e2076ccb28a116cb6d0"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0eaa47d4661c326bfc9d08d16debbc4edf78778e6aaba29c1bc7ce67214d4410"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae88eca3024bb34bb3430f964beab71226e761f51b912de5133470b649d82344"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_24_x86_64.whl", hash = 
"sha256:a522427293d77e1c29e303fc282e2d71864579527a04ddcfda6d4f8396c6c36a"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:fbdaec13c5105f0c4e5c52614d04f0bca5f5af007910daa8b6b12095edaa67b3"}, - {file = "bcrypt-4.0.1-cp36-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:ca3204d00d3cb2dfed07f2d74a25f12fc12f73e606fcaa6975d1f7ae69cacbb2"}, - {file = "bcrypt-4.0.1-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:089098effa1bc35dc055366740a067a2fc76987e8ec75349eb9484061c54f535"}, - {file = "bcrypt-4.0.1-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:e9a51bbfe7e9802b5f3508687758b564069ba937748ad7b9e890086290d2f79e"}, - {file = "bcrypt-4.0.1-cp36-abi3-win32.whl", hash = "sha256:2caffdae059e06ac23fce178d31b4a702f2a3264c20bfb5ff541b338194d8fab"}, - {file = "bcrypt-4.0.1-cp36-abi3-win_amd64.whl", hash = "sha256:8a68f4341daf7522fe8d73874de8906f3a339048ba406be6ddc1b3ccb16fc0d9"}, - {file = "bcrypt-4.0.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf4fa8b2ca74381bb5442c089350f09a3f17797829d958fad058d6e44d9eb83c"}, - {file = "bcrypt-4.0.1-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:67a97e1c405b24f19d08890e7ae0c4f7ce1e56a712a016746c8b2d7732d65d4b"}, - {file = "bcrypt-4.0.1-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b3b85202d95dd568efcb35b53936c5e3b3600c7cdcc6115ba461df3a8e89f38d"}, - {file = "bcrypt-4.0.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cbb03eec97496166b704ed663a53680ab57c5084b2fc98ef23291987b525cb7d"}, - {file = "bcrypt-4.0.1-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:5ad4d32a28b80c5fa6671ccfb43676e8c1cc232887759d1cd7b6f56ea4355215"}, - {file = "bcrypt-4.0.1-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b57adba8a1444faf784394de3436233728a1ecaeb6e07e8c22c8848f179b893c"}, - {file = "bcrypt-4.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:705b2cea8a9ed3d55b4491887ceadb0106acf7c6387699fca771af56b1cdeeda"}, - {file = "bcrypt-4.0.1-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:2b3ac11cf45161628f1f3733263e63194f22664bf4d0c0f3ab34099c02134665"}, - {file = "bcrypt-4.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:3100851841186c25f127731b9fa11909ab7b1df6fc4b9f8353f4f1fd952fbf71"}, - {file = "bcrypt-4.0.1.tar.gz", hash = "sha256:27d375903ac8261cfe4047f6709d16f7d18d39b1ec92aaf72af989552a650ebd"}, + {file = "bcrypt-3.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:c95d4cbebffafcdd28bd28bb4e25b31c50f6da605c81ffd9ad8a3d1b2ab7b1b6"}, + {file = "bcrypt-3.2.0-cp36-abi3-manylinux1_x86_64.whl", hash = "sha256:63d4e3ff96188e5898779b6057878fecf3f11cfe6ec3b313ea09955d587ec7a7"}, + {file = "bcrypt-3.2.0-cp36-abi3-manylinux2010_x86_64.whl", hash = "sha256:cd1ea2ff3038509ea95f687256c46b79f5fc382ad0aa3664d200047546d511d1"}, + {file = "bcrypt-3.2.0-cp36-abi3-manylinux2014_aarch64.whl", hash = "sha256:cdcdcb3972027f83fe24a48b1e90ea4b584d35f1cc279d76de6fc4b13376239d"}, + {file = "bcrypt-3.2.0-cp36-abi3-win32.whl", hash = "sha256:a67fb841b35c28a59cebed05fbd3e80eea26e6d75851f0574a9273c80f3e9b55"}, + {file = "bcrypt-3.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:81fec756feff5b6818ea7ab031205e1d323d8943d237303baca2c5f9c7846f34"}, + {file = "bcrypt-3.2.0.tar.gz", hash = "sha256:5b93c1726e50a93a033c36e5ca7fdcd29a5c7395af50a6892f5d9e7c6cfbfb29"}, ] black = [ - {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"}, - {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"}, - {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"}, - {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"}, - {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"}, - {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"}, - {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"}, - {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"}, - {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"}, - {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"}, - {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"}, - {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"}, -] -certifi = [ - {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"}, - {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"}, + {file = "black-19.3b0-py36-none-any.whl", hash = "sha256:09a9dcb7c46ed496a9850b76e4e825d6049ecd38b611f1224857a79bd985a8cf"}, + {file = "black-19.3b0.tar.gz", hash = "sha256:68950ffd4d9169716bcb8719a56c07a2f4485354fec061cdd5910aa07369731c"}, ] cffi = [ - {file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"}, - {file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"}, - {file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"}, - {file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"}, - {file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"}, - {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"}, - {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"}, - {file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"}, - {file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"}, - {file = 
"cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"}, - {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"}, - {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"}, - {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"}, - {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"}, - {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"}, - {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"}, - {file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"}, - {file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"}, - {file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"}, - {file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"}, - {file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"}, - {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"}, - {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"}, - {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"}, - {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"}, - {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"}, - {file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"}, - {file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"}, - {file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"}, - {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"}, - {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"}, - {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"}, - {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"}, - {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"}, - {file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"}, - {file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"}, - {file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"}, - {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"}, - {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"}, - {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"}, - {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"}, - {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"}, - {file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"}, - {file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"}, - {file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"}, - {file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"}, - {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"}, - {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"}, - {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"}, - {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"}, - {file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"}, - {file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"}, - {file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"}, - {file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"}, - {file = 
"cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"}, - {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"}, - {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"}, - {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"}, - {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"}, - {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"}, - {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"}, - {file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"}, - {file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"}, - {file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"}, + {file = "cffi-1.14.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:da9d3c506f43e220336433dffe643fbfa40096d408cb9b7f2477892f369d5f82"}, + {file = "cffi-1.14.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:23e44937d7695c27c66a54d793dd4b45889a81b35c0751ba91040fe825ec59c4"}, + {file = "cffi-1.14.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:0da50dcbccd7cb7e6c741ab7912b2eff48e85af217d72b57f80ebc616257125e"}, + {file = "cffi-1.14.2-cp27-cp27m-win32.whl", hash = "sha256:76ada88d62eb24de7051c5157a1a78fd853cca9b91c0713c2e973e4196271d0c"}, + {file = "cffi-1.14.2-cp27-cp27m-win_amd64.whl", hash = "sha256:15a5f59a4808f82d8ec7364cbace851df591c2d43bc76bcbe5c4543a7ddd1bf1"}, + {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:e4082d832e36e7f9b2278bc774886ca8207346b99f278e54c9de4834f17232f7"}, + {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:57214fa5430399dffd54f4be37b56fe22cedb2b98862550d43cc085fb698dc2c"}, + {file = "cffi-1.14.2-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:6843db0343e12e3f52cc58430ad559d850a53684f5b352540ca3f1bc56df0731"}, + {file = "cffi-1.14.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:577791f948d34d569acb2d1add5831731c59d5a0c50a6d9f629ae1cefd9ca4a0"}, + {file = "cffi-1.14.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:8662aabfeab00cea149a3d1c2999b0731e70c6b5bac596d95d13f643e76d3d4e"}, + {file = "cffi-1.14.2-cp35-cp35m-win32.whl", hash = "sha256:837398c2ec00228679513802e3744d1e8e3cb1204aa6ad408b6aff081e99a487"}, + {file = "cffi-1.14.2-cp35-cp35m-win_amd64.whl", hash = "sha256:bf44a9a0141a082e89c90e8d785b212a872db793a0080c20f6ae6e2a0ebf82ad"}, + {file = "cffi-1.14.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:29c4688ace466a365b85a51dcc5e3c853c1d283f293dfcc12f7a77e498f160d2"}, + {file = "cffi-1.14.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:99cc66b33c418cd579c0f03b77b94263c305c389cb0c6972dac420f24b3bf123"}, + {file = "cffi-1.14.2-cp36-cp36m-manylinux1_x86_64.whl", hash = 
"sha256:65867d63f0fd1b500fa343d7798fa64e9e681b594e0a07dc934c13e76ee28fb1"}, + {file = "cffi-1.14.2-cp36-cp36m-win32.whl", hash = "sha256:f5033952def24172e60493b68717792e3aebb387a8d186c43c020d9363ee7281"}, + {file = "cffi-1.14.2-cp36-cp36m-win_amd64.whl", hash = "sha256:7057613efefd36cacabbdbcef010e0a9c20a88fc07eb3e616019ea1692fa5df4"}, + {file = "cffi-1.14.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6539314d84c4d36f28d73adc1b45e9f4ee2a89cdc7e5d2b0a6dbacba31906798"}, + {file = "cffi-1.14.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:672b539db20fef6b03d6f7a14b5825d57c98e4026401fce838849f8de73fe4d4"}, + {file = "cffi-1.14.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:95e9094162fa712f18b4f60896e34b621df99147c2cee216cfa8f022294e8e9f"}, + {file = "cffi-1.14.2-cp37-cp37m-win32.whl", hash = "sha256:b9aa9d8818c2e917fa2c105ad538e222a5bce59777133840b93134022a7ce650"}, + {file = "cffi-1.14.2-cp37-cp37m-win_amd64.whl", hash = "sha256:e4b9b7af398c32e408c00eb4e0d33ced2f9121fd9fb978e6c1b57edd014a7d15"}, + {file = "cffi-1.14.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e613514a82539fc48291d01933951a13ae93b6b444a88782480be32245ed4afa"}, + {file = "cffi-1.14.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:9b219511d8b64d3fa14261963933be34028ea0e57455baf6781fe399c2c3206c"}, + {file = "cffi-1.14.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:c0b48b98d79cf795b0916c57bebbc6d16bb43b9fc9b8c9f57f4cf05881904c75"}, + {file = "cffi-1.14.2-cp38-cp38-win32.whl", hash = "sha256:15419020b0e812b40d96ec9d369b2bc8109cc3295eac6e013d3261343580cc7e"}, + {file = "cffi-1.14.2-cp38-cp38-win_amd64.whl", hash = "sha256:12a453e03124069b6896107ee133ae3ab04c624bb10683e1ed1c1663df17c13c"}, + {file = "cffi-1.14.2.tar.gz", hash = "sha256:ae8f34d50af2c2154035984b8b5fc5d9ed63f32fe615646ab435b05b132ca91b"}, ] cfgv = [ - {file = "cfgv-3.3.1-py2.py3-none-any.whl", hash = "sha256:c6a0883f3917a037485059700b9e75da2464e6c27051014ad85ba6aaa5884426"}, - {file = "cfgv-3.3.1.tar.gz", hash = "sha256:f5a830efb9ce7a445376bb66ec94c638a9787422f96264c98edc6bdeed8ab736"}, + {file = "cfgv-3.0.0-py2.py3-none-any.whl", hash = "sha256:f22b426ed59cd2ab2b54ff96608d846c33dfb8766a67f0b4a6ce130ce244414f"}, + {file = "cfgv-3.0.0.tar.gz", hash = "sha256:04b093b14ddf9fd4d17c53ebfd55582d27b76ed30050193c14e560770c5360eb"}, ] click = [ - {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"}, - {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"}, + {file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"}, + {file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"}, ] colorama = [ - {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, - {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, + {file = "colorama-0.4.3-py2.py3-none-any.whl", hash = "sha256:7d73d2a99753107a36ac6b455ee49046802e59d9d076ef8e47b61499fa29afff"}, + {file = "colorama-0.4.3.tar.gz", hash = "sha256:e96da0d330793e2cb9485e9ddfd918d456036c7149416295932478192f4436a1"}, ] cryptography = [ - {file = "cryptography-39.0.1-cp36-abi3-macosx_10_12_universal2.whl", hash = "sha256:6687ef6d0a6497e2b58e7c5b852b53f62142cfa7cd1555795758934da363a965"}, - {file = 
"cryptography-39.0.1-cp36-abi3-macosx_10_12_x86_64.whl", hash = "sha256:706843b48f9a3f9b9911979761c91541e3d90db1ca905fd63fee540a217698bc"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:5d2d8b87a490bfcd407ed9d49093793d0f75198a35e6eb1a923ce1ee86c62b41"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e17b26de248c33f3acffb922748151d71827d6021d98c70e6c1a25ddd78505"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e124352fd3db36a9d4a21c1aa27fd5d051e621845cb87fb851c08f4f75ce8be6"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:5aa67414fcdfa22cf052e640cb5ddc461924a045cacf325cd164e65312d99502"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:35f7c7d015d474f4011e859e93e789c87d21f6f4880ebdc29896a60403328f1f"}, - {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f24077a3b5298a5a06a8e0536e3ea9ec60e4c7ac486755e5fb6e6ea9b3500106"}, - {file = "cryptography-39.0.1-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:f0c64d1bd842ca2633e74a1a28033d139368ad959872533b1bab8c80e8240a0c"}, - {file = "cryptography-39.0.1-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:0f8da300b5c8af9f98111ffd512910bc792b4c77392a9523624680f7956a99d4"}, - {file = "cryptography-39.0.1-cp36-abi3-win32.whl", hash = "sha256:fe913f20024eb2cb2f323e42a64bdf2911bb9738a15dba7d3cce48151034e3a8"}, - {file = "cryptography-39.0.1-cp36-abi3-win_amd64.whl", hash = "sha256:ced4e447ae29ca194449a3f1ce132ded8fcab06971ef5f618605aacaa612beac"}, - {file = "cryptography-39.0.1-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:807ce09d4434881ca3a7594733669bd834f5b2c6d5c7e36f8c00f691887042ad"}, - {file = "cryptography-39.0.1-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c5caeb8188c24888c90b5108a441c106f7faa4c4c075a2bcae438c6e8ca73cef"}, - {file = "cryptography-39.0.1-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4789d1e3e257965e960232345002262ede4d094d1a19f4d3b52e48d4d8f3b885"}, - {file = "cryptography-39.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:96f1157a7c08b5b189b16b47bc9db2332269d6680a196341bf30046330d15388"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e422abdec8b5fa8462aa016786680720d78bdce7a30c652b7fadf83a4ba35336"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:b0afd054cd42f3d213bf82c629efb1ee5f22eba35bf0eec88ea9ea7304f511a2"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:6f8ba7f0328b79f08bdacc3e4e66fb4d7aab0c3584e0bd41328dce5262e26b2e"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:ef8b72fa70b348724ff1218267e7f7375b8de4e8194d1636ee60510aae104cd0"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:aec5a6c9864be7df2240c382740fcf3b96928c46604eaa7f3091f58b878c0bb6"}, - {file = "cryptography-39.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:fdd188c8a6ef8769f148f88f859884507b954cc64db6b52f66ef199bb9ad660a"}, - {file = "cryptography-39.0.1.tar.gz", hash = "sha256:d1f6198ee6d9148405e49887803907fe8962a23e6c6f83ea7d98f1c0de375695"}, + {file = "cryptography-3.0-cp27-cp27m-macosx_10_10_x86_64.whl", hash = 
"sha256:ab49edd5bea8d8b39a44b3db618e4783ef84c19c8b47286bf05dfdb3efb01c83"}, + {file = "cryptography-3.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:124af7255ffc8e964d9ff26971b3a6153e1a8a220b9a685dc407976ecb27a06a"}, + {file = "cryptography-3.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:51e40123083d2f946794f9fe4adeeee2922b581fa3602128ce85ff813d85b81f"}, + {file = "cryptography-3.0-cp27-cp27m-win32.whl", hash = "sha256:dea0ba7fe6f9461d244679efa968d215ea1f989b9c1957d7f10c21e5c7c09ad6"}, + {file = "cryptography-3.0-cp27-cp27m-win_amd64.whl", hash = "sha256:8ecf9400d0893836ff41b6f977a33972145a855b6efeb605b49ee273c5e6469f"}, + {file = "cryptography-3.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:0c608ff4d4adad9e39b5057de43657515c7da1ccb1807c3a27d4cf31fc923b4b"}, + {file = "cryptography-3.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:bec7568c6970b865f2bcebbe84d547c52bb2abadf74cefce396ba07571109c67"}, + {file = "cryptography-3.0-cp35-abi3-macosx_10_10_x86_64.whl", hash = "sha256:0cbfed8ea74631fe4de00630f4bb592dad564d57f73150d6f6796a24e76c76cd"}, + {file = "cryptography-3.0-cp35-abi3-manylinux1_x86_64.whl", hash = "sha256:a09fd9c1cca9a46b6ad4bea0a1f86ab1de3c0c932364dbcf9a6c2a5eeb44fa77"}, + {file = "cryptography-3.0-cp35-abi3-manylinux2010_x86_64.whl", hash = "sha256:ce82cc06588e5cbc2a7df3c8a9c778f2cb722f56835a23a68b5a7264726bb00c"}, + {file = "cryptography-3.0-cp35-cp35m-win32.whl", hash = "sha256:9367d00e14dee8d02134c6c9524bb4bd39d4c162456343d07191e2a0b5ec8b3b"}, + {file = "cryptography-3.0-cp35-cp35m-win_amd64.whl", hash = "sha256:384d7c681b1ab904fff3400a6909261cae1d0939cc483a68bdedab282fb89a07"}, + {file = "cryptography-3.0-cp36-cp36m-win32.whl", hash = "sha256:4d355f2aee4a29063c10164b032d9fa8a82e2c30768737a2fd56d256146ad559"}, + {file = "cryptography-3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:45741f5499150593178fc98d2c1a9c6722df88b99c821ad6ae298eff0ba1ae71"}, + {file = "cryptography-3.0-cp37-cp37m-win32.whl", hash = "sha256:8ecef21ac982aa78309bb6f092d1677812927e8b5ef204a10c326fc29f1367e2"}, + {file = "cryptography-3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:4b9303507254ccb1181d1803a2080a798910ba89b1a3c9f53639885c90f7a756"}, + {file = "cryptography-3.0-cp38-cp38-win32.whl", hash = "sha256:8713ddb888119b0d2a1462357d5946b8911be01ddbf31451e1d07eaa5077a261"}, + {file = "cryptography-3.0-cp38-cp38-win_amd64.whl", hash = "sha256:bea0b0468f89cdea625bb3f692cd7a4222d80a6bdafd6fb923963f2b9da0e15f"}, + {file = "cryptography-3.0.tar.gz", hash = "sha256:8e924dbc025206e97756e8903039662aa58aa9ba357d8e1d8fc29e3092322053"}, +] +dataclasses = [ + {file = "dataclasses-0.7-py3-none-any.whl", hash = "sha256:3459118f7ede7c8bea0fe795bff7c6c2ce287d01dd226202f7c9ebc0610a7836"}, + {file = "dataclasses-0.7.tar.gz", hash = "sha256:494a6dcae3b8bcf80848eea2ef64c0cc5cd307ffc263e17cdf42f3e5420808e6"}, ] distlib = [ - {file = "distlib-0.3.6-py2.py3-none-any.whl", hash = "sha256:f35c4b692542ca110de7ef0bea44d73981caeb34ca0b9b6b2e6d7790dda8f80e"}, - {file = "distlib-0.3.6.tar.gz", hash = "sha256:14bad2d9b04d3a36127ac97f30b12a19268f211063d8f8ee4f47108896e11b46"}, + {file = "distlib-0.3.1-py2.py3-none-any.whl", hash = "sha256:8c09de2c67b3e7deef7184574fc060ab8a793e7adbb183d942c389c8b13c52fb"}, + {file = "distlib-0.3.1.zip", hash = "sha256:edf6116872c863e1aa9d5bb7cb5e05a022c519a4594dc703843343a9ddd9bff1"}, ] fabric = [ - {file = "fabric-2.7.1-py2.py3-none-any.whl", hash = "sha256:7610362318ef2d391cc65d4befb684393975d889ed5720f23499394ec0e136fa"}, - {file = "fabric-2.7.1.tar.gz", hash = 
"sha256:76f8fef59cf2061dbd849bbce4fe49bdd820884385004b0ca59136ac3db129e4"}, + {file = "fabric-2.5.0-py2.py3-none-any.whl", hash = "sha256:160331934ea60036604928e792fa8e9f813266b098ef5562aa82b88527740389"}, + {file = "fabric-2.5.0.tar.gz", hash = "sha256:24842d7d51556adcabd885ac3cf5e1df73fc622a1708bf3667bf5927576cdfa6"}, ] filelock = [ - {file = "filelock-3.9.0-py3-none-any.whl", hash = "sha256:f58d535af89bb9ad5cd4df046f741f8553a418c01a7856bf0d173bbc9f6bd16d"}, - {file = "filelock-3.9.0.tar.gz", hash = "sha256:7b319f24340b51f55a2bf7a12ac0755a9b03e718311dac567a0f4f7fabd2f5de"}, + {file = "filelock-3.0.12-py3-none-any.whl", hash = "sha256:929b7d63ec5b7d6b71b0fa5ac14e030b3f70b75747cef1b10da9b879fef15836"}, + {file = "filelock-3.0.12.tar.gz", hash = "sha256:18d82244ee114f543149c66a6e0c14e9c4f8a1044b5cdaadd0f82159d6a6ff59"}, ] flake8 = [ {file = "flake8-3.8.2-py2.py3-none-any.whl", hash = "sha256:ccaa799ef9893cebe69fdfefed76865aeaefbb94cb8545617b2298786a4de9a5"}, {file = "flake8-3.8.2.tar.gz", hash = "sha256:c69ac1668e434d37a2d2880b3ca9aafd54b3a10a3ac1ab101d22f29e29cf8634"}, ] grpcio = [ - {file = "grpcio-1.54.2-cp310-cp310-linux_armv7l.whl", hash = "sha256:40e1cbf69d6741b40f750f3cccc64326f927ac6145a9914d33879e586002350c"}, - {file = "grpcio-1.54.2-cp310-cp310-macosx_12_0_universal2.whl", hash = "sha256:2288d76e4d4aa7ef3fe7a73c1c470b66ea68e7969930e746a8cd8eca6ef2a2ea"}, - {file = "grpcio-1.54.2-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:c0e3155fc5335ec7b3b70f15230234e529ca3607b20a562b6c75fb1b1218874c"}, - {file = "grpcio-1.54.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bf88004fe086c786dc56ef8dd6cb49c026833fdd6f42cb853008bce3f907148"}, - {file = "grpcio-1.54.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be88c081e33f20630ac3343d8ad9f1125f32987968e9c8c75c051c9800896e8"}, - {file = "grpcio-1.54.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:33d40954199bddbb6a78f8f6f2b2082660f381cd2583ec860a6c2fa7c8400c08"}, - {file = "grpcio-1.54.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b52d00d1793d290c81ad6a27058f5224a7d5f527867e5b580742e1bd211afeee"}, - {file = "grpcio-1.54.2-cp310-cp310-win32.whl", hash = "sha256:881d058c5ccbea7cc2c92085a11947b572498a27ef37d3eef4887f499054dca8"}, - {file = "grpcio-1.54.2-cp310-cp310-win_amd64.whl", hash = "sha256:0212e2f7fdf7592e4b9d365087da30cb4d71e16a6f213120c89b4f8fb35a3ab3"}, - {file = "grpcio-1.54.2-cp311-cp311-linux_armv7l.whl", hash = "sha256:1e623e0cf99a0ac114f091b3083a1848dbc64b0b99e181473b5a4a68d4f6f821"}, - {file = "grpcio-1.54.2-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:66233ccd2a9371158d96e05d082043d47dadb18cbb294dc5accfdafc2e6b02a7"}, - {file = "grpcio-1.54.2-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:4cb283f630624ebb16c834e5ac3d7880831b07cbe76cb08ab7a271eeaeb8943e"}, - {file = "grpcio-1.54.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2a1e601ee31ef30a9e2c601d0867e236ac54c922d32ed9f727b70dd5d82600d5"}, - {file = "grpcio-1.54.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f8da84bbc61a4e92af54dc96344f328e5822d574f767e9b08e1602bb5ddc254a"}, - {file = "grpcio-1.54.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:5008964885e8d23313c8e5ea0d44433be9bfd7e24482574e8cc43c02c02fc796"}, - {file = "grpcio-1.54.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a2f5a1f1080ccdc7cbaf1171b2cf384d852496fe81ddedeb882d42b85727f610"}, - {file = "grpcio-1.54.2-cp311-cp311-win32.whl", hash = 
"sha256:b74ae837368cfffeb3f6b498688a123e6b960951be4dec0e869de77e7fa0439e"}, - {file = "grpcio-1.54.2-cp311-cp311-win_amd64.whl", hash = "sha256:8cdbcbd687e576d48f7886157c95052825ca9948c0ed2afdc0134305067be88b"}, - {file = "grpcio-1.54.2-cp37-cp37m-linux_armv7l.whl", hash = "sha256:782f4f8662a2157c4190d0f99eaaebc602899e84fb1e562a944e5025929e351c"}, - {file = "grpcio-1.54.2-cp37-cp37m-macosx_10_10_universal2.whl", hash = "sha256:714242ad0afa63a2e6dabd522ae22e1d76e07060b5af2ddda5474ba4f14c2c94"}, - {file = "grpcio-1.54.2-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:f900ed4ad7a0f1f05d35f955e0943944d5a75f607a836958c6b8ab2a81730ef2"}, - {file = "grpcio-1.54.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96a41817d2c763b1d0b32675abeb9179aa2371c72aefdf74b2d2b99a1b92417b"}, - {file = "grpcio-1.54.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70fcac7b94f4c904152809a050164650ac81c08e62c27aa9f156ac518029ebbe"}, - {file = "grpcio-1.54.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:fd6c6c29717724acf9fc1847c4515d57e4dc12762452457b9cb37461f30a81bb"}, - {file = "grpcio-1.54.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:c2392f5b5d84b71d853918687d806c1aa4308109e5ca158a16e16a6be71041eb"}, - {file = "grpcio-1.54.2-cp37-cp37m-win_amd64.whl", hash = "sha256:51630c92591d6d3fe488a7c706bd30a61594d144bac7dee20c8e1ce78294f474"}, - {file = "grpcio-1.54.2-cp38-cp38-linux_armv7l.whl", hash = "sha256:b04202453941a63b36876a7172b45366dc0cde10d5fd7855c0f4a4e673c0357a"}, - {file = "grpcio-1.54.2-cp38-cp38-macosx_10_10_universal2.whl", hash = "sha256:89dde0ac72a858a44a2feb8e43dc68c0c66f7857a23f806e81e1b7cc7044c9cf"}, - {file = "grpcio-1.54.2-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:09d4bfd84686cd36fd11fd45a0732c7628308d094b14d28ea74a81db0bce2ed3"}, - {file = "grpcio-1.54.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7fc2b4edb938c8faa4b3c3ea90ca0dd89b7565a049e8e4e11b77e60e4ed2cc05"}, - {file = "grpcio-1.54.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:61f7203e2767800edee7a1e1040aaaf124a35ce0c7fe0883965c6b762defe598"}, - {file = "grpcio-1.54.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:e416c8baf925b5a1aff31f7f5aecc0060b25d50cce3a5a7255dc5cf2f1d4e5eb"}, - {file = "grpcio-1.54.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:dc80c9c6b608bf98066a038e0172013a49cfa9a08d53335aefefda2c64fc68f4"}, - {file = "grpcio-1.54.2-cp38-cp38-win32.whl", hash = "sha256:8d6192c37a30a115f4663592861f50e130caed33efc4eec24d92ec881c92d771"}, - {file = "grpcio-1.54.2-cp38-cp38-win_amd64.whl", hash = "sha256:46a057329938b08e5f0e12ea3d7aed3ecb20a0c34c4a324ef34e00cecdb88a12"}, - {file = "grpcio-1.54.2-cp39-cp39-linux_armv7l.whl", hash = "sha256:2296356b5c9605b73ed6a52660b538787094dae13786ba53080595d52df13a98"}, - {file = "grpcio-1.54.2-cp39-cp39-macosx_10_10_universal2.whl", hash = "sha256:c72956972e4b508dd39fdc7646637a791a9665b478e768ffa5f4fe42123d5de1"}, - {file = "grpcio-1.54.2-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:9bdbb7624d65dc0ed2ed8e954e79ab1724526f09b1efa88dcd9a1815bf28be5f"}, - {file = "grpcio-1.54.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c44e1a765b31e175c391f22e8fc73b2a2ece0e5e6ff042743d8109b5d2eff9f"}, - {file = "grpcio-1.54.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5cc928cfe6c360c1df636cf7991ab96f059666ac7b40b75a769410cc6217df9c"}, - {file = "grpcio-1.54.2-cp39-cp39-musllinux_1_1_i686.whl", hash = 
"sha256:a08920fa1a97d4b8ee5db2f31195de4a9def1a91bc003544eb3c9e6b8977960a"}, - {file = "grpcio-1.54.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4864f99aac207e3e45c5e26c6cbb0ad82917869abc2f156283be86c05286485c"}, - {file = "grpcio-1.54.2-cp39-cp39-win32.whl", hash = "sha256:b38b3de8cff5bc70f8f9c615f51b48eff7313fc9aca354f09f81b73036e7ddfa"}, - {file = "grpcio-1.54.2-cp39-cp39-win_amd64.whl", hash = "sha256:be48496b0e00460717225e7680de57c38be1d8629dc09dadcd1b3389d70d942b"}, - {file = "grpcio-1.54.2.tar.gz", hash = "sha256:50a9f075eeda5097aa9a182bb3877fe1272875e45370368ac0ee16ab9e22d019"}, + {file = "grpcio-1.27.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:dbec0a3a154dbf2eb85b38abaddf24964fa1c059ee0a4ad55d6f39211b1a4bca"}, + {file = "grpcio-1.27.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:1ef949b15a1f5f30651532a9b54edf3bd7c0b699a10931505fa2c80b2d395942"}, + {file = "grpcio-1.27.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:ed123037896a8db6709b8ad5acc0ed435453726ea0b63361d12de369624c2ab5"}, + {file = "grpcio-1.27.2-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:f9d632ce9fd485119c968ec6a7a343de698c5e014d17602ae2f110f1b05925ed"}, + {file = "grpcio-1.27.2-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:80c3d1ce8820dd819d1c9d6b63b6f445148480a831173b572a9174a55e7abd47"}, + {file = "grpcio-1.27.2-cp27-cp27m-win32.whl", hash = "sha256:07f82aefb4a56c7e1e52b78afb77d446847d27120a838a1a0489260182096045"}, + {file = "grpcio-1.27.2-cp27-cp27m-win_amd64.whl", hash = "sha256:28f27c64dd699b8b10f70da5f9320c1cffcaefca7dd76275b44571bd097f276c"}, + {file = "grpcio-1.27.2-cp27-cp27mu-linux_armv7l.whl", hash = "sha256:a25b84e10018875a0f294a7649d07c43e8bc3e6a821714e39e5cd607a36386d7"}, + {file = "grpcio-1.27.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:57949756a3ce1f096fa2b00f812755f5ab2effeccedb19feeb7d0deafa3d1de7"}, + {file = "grpcio-1.27.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:f3614dabd2cc8741850597b418bcf644d4f60e73615906c3acc407b78ff720b3"}, + {file = "grpcio-1.27.2-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:25c77692ea8c0929d4ad400ea9c3dcbcc4936cee84e437e0ef80da58fa73d88a"}, + {file = "grpcio-1.27.2-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:5dab393ab96b2ce4012823b2f2ed4ee907150424d2f02b97bd6f8dd8f17cc866"}, + {file = "grpcio-1.27.2-cp35-cp35m-linux_armv7l.whl", hash = "sha256:bb2987eb3af9bcf46019be39b82c120c3d35639a95bc4ee2d08f36ecdf469345"}, + {file = "grpcio-1.27.2-cp35-cp35m-macosx_10_7_intel.whl", hash = "sha256:6f328a3faaf81a2546a3022b3dfc137cc6d50d81082dbc0c94d1678943f05df3"}, + {file = "grpcio-1.27.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:5ebc13451246de82f130e8ee7e723e8d7ae1827f14b7b0218867667b1b12c88d"}, + {file = "grpcio-1.27.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:355bd7d7ce5ff2917d217f0e8ddac568cb7403e1ce1639b35a924db7d13a39b6"}, + {file = "grpcio-1.27.2-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:d1e5563e3b7f844dbc48d709c9e4a75647e11d0387cc1fa0c861d3e9d34bc844"}, + {file = "grpcio-1.27.2-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:1ec8fc865d8da6d0713e2092a27eee344cd54628b2c2065a0e77fff94df4ae00"}, + {file = "grpcio-1.27.2-cp35-cp35m-win32.whl", hash = "sha256:706e2dea3de33b0d8884c4d35ecd5911b4ff04d0697c4138096666ce983671a6"}, + {file = "grpcio-1.27.2-cp35-cp35m-win_amd64.whl", hash = "sha256:d18b4c8cacbb141979bb44355ee5813dd4d307e9d79b3a36d66eca7e0a203df8"}, + {file = "grpcio-1.27.2-cp36-cp36m-linux_armv7l.whl", hash = 
"sha256:02aef8ef1a5ac5f0836b543e462eb421df6048a7974211a906148053b8055ea6"}, + {file = "grpcio-1.27.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b78af4d42985ab3143d9882d0006f48d12f1bc4ba88e78f23762777c3ee64571"}, + {file = "grpcio-1.27.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:9c0669ba9aebad540fb05a33beb7e659ea6e5ca35833fc5229c20f057db760e8"}, + {file = "grpcio-1.27.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:68a149a0482d0bc697aac702ec6efb9d380e0afebf9484db5b7e634146528371"}, + {file = "grpcio-1.27.2-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:a71138366d57901597bfcc52af7f076ab61c046f409c7b429011cd68de8f9fe6"}, + {file = "grpcio-1.27.2-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:9e9cfe55dc7ac2aa47e0fd3285ff829685f96803197042c9d2f0fb44e4b39b2c"}, + {file = "grpcio-1.27.2-cp36-cp36m-win32.whl", hash = "sha256:d22c897b65b1408509099f1c3334bd3704f5e4eb7c0486c57d0e212f71cb8f54"}, + {file = "grpcio-1.27.2-cp36-cp36m-win_amd64.whl", hash = "sha256:c59b9280284b791377b3524c8e39ca7b74ae2881ba1a6c51b36f4f1bb94cee49"}, + {file = "grpcio-1.27.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6e545908bcc2ae28e5b190ce3170f92d0438cf26a82b269611390114de0106eb"}, + {file = "grpcio-1.27.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6db7ded10b82592c472eeeba34b9f12d7b0ab1e2dcad12f081b08ebdea78d7d6"}, + {file = "grpcio-1.27.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:4d3b6e66f32528bf43ca2297caca768280a8e068820b1c3dca0fcf9f03c7d6f1"}, + {file = "grpcio-1.27.2-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:586d931736912865c9790c60ca2db29e8dc4eace160d5a79fec3e58df79a9386"}, + {file = "grpcio-1.27.2-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:c03ce53690fe492845e14f4ab7e67d5a429a06db99b226b5c7caa23081c1e2bb"}, + {file = "grpcio-1.27.2-cp37-cp37m-win32.whl", hash = "sha256:209927e65395feb449783943d62a3036982f871d7f4045fadb90b2d82b153ea8"}, + {file = "grpcio-1.27.2-cp37-cp37m-win_amd64.whl", hash = "sha256:9713578f187fb1c4d00ac554fe1edcc6b3ddd62f5d4eb578b81261115802df8e"}, + {file = "grpcio-1.27.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b4efde5524579a9ce0459ca35a57a48ca878a4973514b8bb88cb80d7c9d34c85"}, + {file = "grpcio-1.27.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:fb62996c61eeff56b59ab8abfcaa0859ec2223392c03d6085048b576b567459b"}, + {file = "grpcio-1.27.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:a22daaf30037b8e59d6968c76fe0f7ff062c976c7a026e92fbefc4c4bf3fc5a4"}, + {file = "grpcio-1.27.2-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4a0a33ada3f6f94f855f92460896ef08c798dcc5f17d9364d1735c5adc9d7e4a"}, + {file = "grpcio-1.27.2-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:8111b61eee12d7af5c58f82f2c97c2664677a05df9225ef5cbc2f25398c8c454"}, + {file = "grpcio-1.27.2-cp38-cp38-win32.whl", hash = "sha256:5121fa96c79fc0ec81825091d0be5c16865f834f41b31da40b08ee60552f9961"}, + {file = "grpcio-1.27.2-cp38-cp38-win_amd64.whl", hash = "sha256:1cff47297ee614e7ef66243dc34a776883ab6da9ca129ea114a802c5e58af5c1"}, + {file = "grpcio-1.27.2.tar.gz", hash = "sha256:5ae532b93cf9ce5a2a549b74a2c35e3b690b171ece9358519b3039c7b84c887e"}, ] grpcio-tools = [ - {file = "grpcio-tools-1.54.2.tar.gz", hash = "sha256:e11c2c2aee53f340992e8e4d6a59172cbbbd0193f1351de98c4f810a5041d5ca"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-linux_armv7l.whl", hash = "sha256:2b96f5f17d3156058be247fd25b062b4768138665694c00b056659618b8fb418"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-macosx_12_0_universal2.whl", hash = 
"sha256:11939c9a8a39bd4815c7e88cb2fee48e1948775b59dbb06de8fcae5991e84f9e"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:129de5579f95d6a55dde185f188b4cbe19d1e2f1471425431d9930c31d300d70"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c4128c01cd6f5ea8f7c2db405dbfd8582cd967d36e6fa0952565436633b0e591"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e5c7292dd899ad8fa09a2be96719648cee37b17909fe8c12007e3bff58ebee61"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5ef30c2dbc63c1e0a462423ca4f95001814d26ef4fe66208e53fcf220ea3b717"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4abfc1892380abe6cef381eab86f9350cbd703bfe5d834095aa66fd91c886b6d"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-win32.whl", hash = "sha256:9acf443dcf6f68fbea3b7fb519e1716e014db1a561939f5aecc4abda74e4015d"}, - {file = "grpcio_tools-1.54.2-cp310-cp310-win_amd64.whl", hash = "sha256:21b9d2dee80f3f77e4097252e7f0db89772335a7300b72ab3d2e5c280872b1db"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-linux_armv7l.whl", hash = "sha256:7b24fbab9e7598518ce4549e066df00aab79c2bf9bedcdde23fb5ef6a3cf532f"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-macosx_10_10_universal2.whl", hash = "sha256:7baa210c20f71a242d9ae0e02734628f6948e8bee3bf538647894af427d28800"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:e3d0e5188ff8dbaddac2ee44731d36f09c4eccd3eac7328e547862c44f75cacd"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:27671c68c7e0e3c5ff9967f5500799f65a04e7b153b8ce10243c87c43199039d"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f39d8e8806b8857fb473ca6a9c7bd800b0673dfdb7283ff569af0345a222f32c"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:8e4c5a48f7b2e8798ce381498ee7b9a83c65b87ae66ee5022387394e5eb51771"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4f285f8ef3de422717a36bd372239ae778b8cc112ce780ca3c7fe266dadc49fb"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-win32.whl", hash = "sha256:0f952c8a5c47e9204fe8959f7e9add149e660f6579d67cf65024c32736d34caf"}, - {file = "grpcio_tools-1.54.2-cp311-cp311-win_amd64.whl", hash = "sha256:3237149beec39e897fd62cef4aa1e1cd9422d7a95661d24bd0a79200b167e730"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-linux_armv7l.whl", hash = "sha256:0ab1b323905d449298523db5d34fa5bf5fffd645bd872b25598e2f8a01f0ea39"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-macosx_10_10_universal2.whl", hash = "sha256:7d7e6e8d62967b3f037f952620cb7381cc39a4bd31790c75fcfba56cc975d70b"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:7f4624ef2e76a3a5313c4e61a81be38bcc16b59a68a85d30758b84cd2102b161"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e543f457935ba7b763b121f1bf893974393b4d30065042f947f85a8d81081b80"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0239b929eb8b3b30b2397eef3b9abb245087754d77c3721e3be43c44796de87d"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:0de05c7698c655e9a240dc34ae91d6017b93143ac89e5b20046d7ca3bd09c27c"}, - {file = 
"grpcio_tools-1.54.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a3ce0b98fb581c471424d2cda45120f57658ed97677c6fec4d6decf5d7c1b976"}, - {file = "grpcio_tools-1.54.2-cp37-cp37m-win_amd64.whl", hash = "sha256:37393ef90674964175923afe3859fc5a208e1ece565f642b4f76a8c0224a0993"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-linux_armv7l.whl", hash = "sha256:8e4531267736d88fde1022b36dd42ed8163e3575bcbd12bfed96662872aa93fe"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-macosx_10_10_universal2.whl", hash = "sha256:a0b7049814442f918b522d66b1d015286afbeb9e6d141af54bbfafe31710a3c8"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:b80585e06c4f0082327eb5c9ad96fbdb2b0e7c14971ea5099fe78c22f4608451"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39fd530cfdf58dc05125775cc233b05554d553d27478f14ae5fd8a6306f0cb28"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3bb9ec4aea0f2b3006fb002fa59e5c10f92b48fc374619fbffd14d2b0e388c3e"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:d512de051342a576bb89777476d13c5266d9334cf4badb6468aed9dc8f5bdec1"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:1b8ee3099c51ce987fa8a08e6b93fc342b10228415dd96b5c0caa0387f636a6f"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-win32.whl", hash = "sha256:6037f123905dc0141f7c8383ca616ef0195e79cd3b4d82faaee789d4045e891b"}, - {file = "grpcio_tools-1.54.2-cp38-cp38-win_amd64.whl", hash = "sha256:10dd41862f579d185c60f629b5ee89103e216f63b576079d258d974d980bad87"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-linux_armv7l.whl", hash = "sha256:f6787d07fdab31a32c433c1ba34883dea6559d8a3fbe08fb93d834ca34136b71"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-macosx_10_10_universal2.whl", hash = "sha256:21b1467e31e44429d2a78b50135c9cdbd4b8f6d3b5cd548bc98985d3bdc352d0"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:30a49b8b168aced2a4ff40959e6c4383ad6cfd7a20839a47a215e9837eb722dc"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8742122782953d2fd038f0a199f047a24e941cc9718b1aac90876dbdb7167739"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:503ef1351c62fb1d6747eaf74932b609d8fdd4345b3591ef910adef8fa9969d0"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:72d15de4c4b6a764a76c4ae69d99c35f7a0751223688c3f7e62dfa95eb4f61be"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:df079479fb1b9e488334312e35ebbf30cbf5ecad6c56599f1a961800b33ab7c1"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-win32.whl", hash = "sha256:49c2846dcc4803476e839d8bd4db8845e928f19130e0ea86121f2d1f43d2b452"}, - {file = "grpcio_tools-1.54.2-cp39-cp39-win_amd64.whl", hash = "sha256:b82ca472db9c914c44e39a41e9e8bd3ed724523dd7aff5ce37592b8d16920ed9"}, + {file = "grpcio-tools-1.27.2.tar.gz", hash = "sha256:845a51305af9fc7f9e2078edaec9a759153195f6cf1fbb12b1fa6f077e56b260"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:7a2d5fb558ac153a326e742ebfd7020eb781c43d3ffd920abd42b2e6c6fdfb37"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:99961156a36aae4a402d6b14c1e7efde642794b3ddbf32c51db0cb3a199e8b11"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-manylinux1_x86_64.whl", hash = 
"sha256:069826dd02ce1886444cf4519c4fe1b05ac9ef41491f26e97400640531db47f6"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:fae91f30dc050a8d0b32d20dc700e6092f0bd2138d83e9570fff3f0372c1b27e"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:a14dc7a36c845991d908a7179502ca47bcba5ae1817c4426ce68cf2c97b20ad9"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-win32.whl", hash = "sha256:d1a5e5fa47ba9557a7d3b31605631805adc66cdba9d95b5d10dfc52cca1fed53"}, + {file = "grpcio_tools-1.27.2-cp27-cp27m-win_amd64.whl", hash = "sha256:7b54b283ec83190680903a9037376dc915e1f03852a2d574ba4d981b7a1fd3d0"}, + {file = "grpcio_tools-1.27.2-cp27-cp27mu-linux_armv7l.whl", hash = "sha256:4698c6b6a57f73b14d91a542c69ff33a2da8729691b7060a5d7f6383624d045e"}, + {file = "grpcio_tools-1.27.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:87e8ca2c2d2d3e09b2a2bed5d740d7b3e64028dafb7d6be543b77eec85590736"}, + {file = "grpcio_tools-1.27.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:bd7f59ff1252a3db8a143b13ea1c1e93d4b8cf4b852eb48b22ef1e6942f62a84"}, + {file = "grpcio_tools-1.27.2-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:a8f892378b0b02526635b806f59141abbb429d19bec56e869e04f396502c9651"}, + {file = "grpcio_tools-1.27.2-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:69c4a63919b9007e845d9f8980becd2f89d808a4a431ca32b9723ee37b521cb1"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-linux_armv7l.whl", hash = "sha256:dcbc06556f3713a9348c4fce02d05d91e678fc320fb2bcf0ddf8e4bb11d17867"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:16dc3fad04fe18d50777c56af7b2d9b9984cd1cfc71184646eb431196d1645c6"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:1de5a273eaffeb3d126a63345e9e848ea7db740762f700eb8b5d84c5e3e7687d"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6016c07d6566e3109a3c032cf3861902d66501ecc08a5a84c47e43027302f367"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:915a695bc112517af48126ee0ecdb6aff05ed33f3eeef28f0d076f1f6b52ef5e"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:ea4b3ad696d976d5eac74ec8df9a2c692113e455446ee38d5b3bd87f8e034fa6"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-win32.whl", hash = "sha256:a140bf853edb2b5e8692fe94869e3e34077d7599170c113d07a58286c604f4fe"}, + {file = "grpcio_tools-1.27.2-cp35-cp35m-win_amd64.whl", hash = "sha256:77e25c241e33b75612f2aa62985f746c6f6803ec4e452da508bb7f8d90a69db4"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-linux_armv7l.whl", hash = "sha256:5fd7efc2fd3370bd2c72dc58f31a407a5dff5498befa145da211b2e8c6a52c63"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9ba88c2d99bcaf7b9cb720925e3290d73b2367d238c5779363fd5598b2dc98c7"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:b56caecc16307b088a431a4038c3b3bb7d0e7f9988cbd0e9fa04ac937455ea38"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f8514453411d72cc3cf7d481f2b6057e5b7436736d0cd39ee2b2f72088bbf497"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:c1bb8f47d58e9f7c4825abfe01e6b85eda53c8b31d2267ca4cddf3c4d0829b80"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e17b2e0936b04ced99769e26111e1e86ba81619d1b2691b1364f795e45560953"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-win32.whl", hash = 
"sha256:520b7dafddd0f82cb7e4f6e9c6ba1049aa804d0e207870def9fe7f94d1e14090"}, + {file = "grpcio_tools-1.27.2-cp36-cp36m-win_amd64.whl", hash = "sha256:ee50b0cf0d28748ef9f941894eb50fc464bd61b8e96aaf80c5056bea9b80d580"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:627c91923df75091d8c4d244af38d5ab7ed8d786d480751d6c2b9267fbb92fe0"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:ef624b6134aef737b3daa4fb7e806cb8c5749efecd0b1fa9ce4f7e060c7a0221"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:e6932518db389ede8bf06b4119bbd3e17f42d4626e72dec2b8955b20ec732cb6"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:43a1573400527a23e4174d88604fde7a9d9a69bf9473c21936b7f409858f8ebb"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:57f8b9e2c7f55cd45f6dd930d6de61deb42d3eb7f9788137fbc7155cf724132a"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-win32.whl", hash = "sha256:2ca280af2cae1a014a238057bd3c0a254527569a6a9169a01c07f0590081d530"}, + {file = "grpcio_tools-1.27.2-cp37-cp37m-win_amd64.whl", hash = "sha256:59fbeb5bb9a7b94eb61642ac2cee1db5233b8094ca76fc56d4e0c6c20b5dd85f"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:00c5080cfb197ed20ecf0d0ff2d07f1fc9c42c724cad21c40ff2d048de5712b1"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:f5450aa904e720f9c6407b59e96a8951ed6a95463f49444b6d2594b067d39588"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:aaa5ae26883c3d58d1a4323981f96b941fa09bb8f0f368d97c6225585280cf04"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:1266b577abe7c720fd16a83d0a4999a192e87c4a98fc9f97e0b99b106b3e155f"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:a3d2aec4b09c8e59fee8b0d1ed668d09e8c48b738f03f5d8401d7eb409111c47"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-win32.whl", hash = "sha256:8e7738a4b93842bca1158cde81a3587c9b7111823e40a1ddf73292ca9d58e08b"}, + {file = "grpcio_tools-1.27.2-cp38-cp38-win_amd64.whl", hash = "sha256:84724458c86ff9b14c29b49e321f34d80445b379f4cd4d0494c694b49b1d6f88"}, ] identify = [ - {file = "identify-2.5.18-py2.py3-none-any.whl", hash = "sha256:93aac7ecf2f6abf879b8f29a8002d3c6de7086b8c28d88e1ad15045a15ab63f9"}, - {file = "identify-2.5.18.tar.gz", hash = "sha256:89e144fa560cc4cffb6ef2ab5e9fb18ed9f9b3cb054384bab4b95c12f6c309fe"}, + {file = "identify-1.4.28-py2.py3-none-any.whl", hash = "sha256:69c4769f085badafd0e04b1763e847258cbbf6d898e8678ebffc91abdb86f6c6"}, + {file = "identify-1.4.28.tar.gz", hash = "sha256:d6ae6daee50ba1b493e9ca4d36a5edd55905d2cf43548fdc20b2a14edef102e7"}, ] -iniconfig = [ - {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"}, - {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"}, +importlib-metadata = [ + {file = "importlib_metadata-1.7.0-py2.py3-none-any.whl", hash = "sha256:dc15b2969b4ce36305c51eebe62d418ac7791e9a157911d58bfb1f9ccd8e2070"}, + {file = "importlib_metadata-1.7.0.tar.gz", hash = "sha256:90bb658cdbbf6d1735b6341ce708fc7024a3e14e99ffdc5783edea9f9b077f83"}, +] +importlib-resources = [ + {file = "importlib_resources-3.0.0-py2.py3-none-any.whl", hash = "sha256:d028f66b66c0d5732dae86ba4276999855e162a749c92620a38c1d779ed138a7"}, + {file = 
"importlib_resources-3.0.0.tar.gz", hash = "sha256:19f745a6eca188b490b1428c8d1d4a0d2368759f32370ea8fb89cad2ab1106c3"}, ] invoke = [ - {file = "invoke-1.7.3-py3-none-any.whl", hash = "sha256:d9694a865764dd3fd91f25f7e9a97fb41666e822bbb00e670091e3f43933574d"}, - {file = "invoke-1.7.3.tar.gz", hash = "sha256:41b428342d466a82135d5ab37119685a989713742be46e42a3a399d685579314"}, + {file = "invoke-1.4.1-py2-none-any.whl", hash = "sha256:93e12876d88130c8e0d7fd6618dd5387d6b36da55ad541481dfa5e001656f134"}, + {file = "invoke-1.4.1-py3-none-any.whl", hash = "sha256:87b3ef9d72a1667e104f89b159eaf8a514dbf2f3576885b2bbdefe74c3fb2132"}, + {file = "invoke-1.4.1.tar.gz", hash = "sha256:de3f23bfe669e3db1085789fd859eb8ca8e0c5d9c20811e2407fa042e8a5e15d"}, ] isort = [ {file = "isort-4.3.21-py2.py3-none-any.whl", hash = "sha256:6e811fcb295968434526407adb8796944f1988c5b65e8139058f2014cbe100fd"}, {file = "isort-4.3.21.tar.gz", hash = "sha256:54da7e92468955c4fceacd0c86bd0ec997b0e1ee80d97f67c35a78b719dccab1"}, ] lxml = [ - {file = "lxml-4.9.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:98cafc618614d72b02185ac583c6f7796202062c41d2eeecdf07820bad3295ed"}, - {file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c62e8dd9754b7debda0c5ba59d34509c4688f853588d75b53c3791983faa96fc"}, - {file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21fb3d24ab430fc538a96e9fbb9b150029914805d551deeac7d7822f64631dfc"}, - {file = "lxml-4.9.1-cp27-cp27m-win32.whl", hash = "sha256:86e92728ef3fc842c50a5cb1d5ba2bc66db7da08a7af53fb3da79e202d1b2cd3"}, - {file = "lxml-4.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4cfbe42c686f33944e12f45a27d25a492cc0e43e1dc1da5d6a87cbcaf2e95627"}, - {file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dad7b164905d3e534883281c050180afcf1e230c3d4a54e8038aa5cfcf312b84"}, - {file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a614e4afed58c14254e67862456d212c4dcceebab2eaa44d627c2ca04bf86837"}, - {file = "lxml-4.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:f9ced82717c7ec65a67667bb05865ffe38af0e835cdd78728f1209c8fffe0cad"}, - {file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d9fc0bf3ff86c17348dfc5d322f627d78273eba545db865c3cd14b3f19e57fa5"}, - {file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:e5f66bdf0976ec667fc4594d2812a00b07ed14d1b44259d19a41ae3fff99f2b8"}, - {file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fe17d10b97fdf58155f858606bddb4e037b805a60ae023c009f760d8361a4eb8"}, - {file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8caf4d16b31961e964c62194ea3e26a0e9561cdf72eecb1781458b67ec83423d"}, - {file = "lxml-4.9.1-cp310-cp310-win32.whl", hash = "sha256:4780677767dd52b99f0af1f123bc2c22873d30b474aa0e2fc3fe5e02217687c7"}, - {file = "lxml-4.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:b122a188cd292c4d2fcd78d04f863b789ef43aa129b233d7c9004de08693728b"}, - {file = "lxml-4.9.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:be9eb06489bc975c38706902cbc6888f39e946b81383abc2838d186f0e8b6a9d"}, - {file = "lxml-4.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f1be258c4d3dc609e654a1dc59d37b17d7fef05df912c01fc2e15eb43a9735f3"}, - {file = 
"lxml-4.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:927a9dd016d6033bc12e0bf5dee1dde140235fc8d0d51099353c76081c03dc29"}, - {file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9232b09f5efee6a495a99ae6824881940d6447debe272ea400c02e3b68aad85d"}, - {file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:04da965dfebb5dac2619cb90fcf93efdb35b3c6994fea58a157a834f2f94b318"}, - {file = "lxml-4.9.1-cp35-cp35m-win32.whl", hash = "sha256:4d5bae0a37af799207140652a700f21a85946f107a199bcb06720b13a4f1f0b7"}, - {file = "lxml-4.9.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4878e667ebabe9b65e785ac8da4d48886fe81193a84bbe49f12acff8f7a383a4"}, - {file = "lxml-4.9.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:1355755b62c28950f9ce123c7a41460ed9743c699905cbe664a5bcc5c9c7c7fb"}, - {file = "lxml-4.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:bcaa1c495ce623966d9fc8a187da80082334236a2a1c7e141763ffaf7a405067"}, - {file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eafc048ea3f1b3c136c71a86db393be36b5b3d9c87b1c25204e7d397cee9536"}, - {file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:13c90064b224e10c14dcdf8086688d3f0e612db53766e7478d7754703295c7c8"}, - {file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206a51077773c6c5d2ce1991327cda719063a47adc02bd703c56a662cdb6c58b"}, - {file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e8f0c9d65da595cfe91713bc1222af9ecabd37971762cb830dea2fc3b3bb2acf"}, - {file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:8f0a4d179c9a941eb80c3a63cdb495e539e064f8054230844dcf2fcb812b71d3"}, - {file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:830c88747dce8a3e7525defa68afd742b4580df6aa2fdd6f0855481e3994d391"}, - {file = "lxml-4.9.1-cp36-cp36m-win32.whl", hash = "sha256:1e1cf47774373777936c5aabad489fef7b1c087dcd1f426b621fda9dcc12994e"}, - {file = "lxml-4.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:5974895115737a74a00b321e339b9c3f45c20275d226398ae79ac008d908bff7"}, - {file = "lxml-4.9.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:1423631e3d51008871299525b541413c9b6c6423593e89f9c4cfbe8460afc0a2"}, - {file = "lxml-4.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:2aaf6a0a6465d39b5ca69688fce82d20088c1838534982996ec46633dc7ad6cc"}, - {file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:9f36de4cd0c262dd9927886cc2305aa3f2210db437aa4fed3fb4940b8bf4592c"}, - {file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ae06c1e4bc60ee076292e582a7512f304abdf6c70db59b56745cca1684f875a4"}, - {file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:57e4d637258703d14171b54203fd6822fda218c6c2658a7d30816b10995f29f3"}, - {file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d279033bf614953c3fc4a0aa9ac33a21e8044ca72d4fa8b9273fe75359d5cca"}, - {file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a60f90bba4c37962cbf210f0188ecca87daafdf60271f4c6948606e4dabf8785"}, - {file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = 
"sha256:6ca2264f341dd81e41f3fffecec6e446aa2121e0b8d026fb5130e02de1402785"}, - {file = "lxml-4.9.1-cp37-cp37m-win32.whl", hash = "sha256:27e590352c76156f50f538dbcebd1925317a0f70540f7dc8c97d2931c595783a"}, - {file = "lxml-4.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:eea5d6443b093e1545ad0210e6cf27f920482bfcf5c77cdc8596aec73523bb7e"}, - {file = "lxml-4.9.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:f05251bbc2145349b8d0b77c0d4e5f3b228418807b1ee27cefb11f69ed3d233b"}, - {file = "lxml-4.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:487c8e61d7acc50b8be82bda8c8d21d20e133c3cbf41bd8ad7eb1aaeb3f07c97"}, - {file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:8d1a92d8e90b286d491e5626af53afef2ba04da33e82e30744795c71880eaa21"}, - {file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b570da8cd0012f4af9fa76a5635cd31f707473e65a5a335b186069d5c7121ff2"}, - {file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ef87fca280fb15342726bd5f980f6faf8b84a5287fcc2d4962ea8af88b35130"}, - {file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:93e414e3206779ef41e5ff2448067213febf260ba747fc65389a3ddaa3fb8715"}, - {file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6653071f4f9bac46fbc30f3c7838b0e9063ee335908c5d61fb7a4a86c8fd2036"}, - {file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:32a73c53783becdb7eaf75a2a1525ea8e49379fb7248c3eeefb9412123536387"}, - {file = "lxml-4.9.1-cp38-cp38-win32.whl", hash = "sha256:1a7c59c6ffd6ef5db362b798f350e24ab2cfa5700d53ac6681918f314a4d3b94"}, - {file = "lxml-4.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:1436cf0063bba7888e43f1ba8d58824f085410ea2025befe81150aceb123e345"}, - {file = "lxml-4.9.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4beea0f31491bc086991b97517b9683e5cfb369205dac0148ef685ac12a20a67"}, - {file = "lxml-4.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:41fb58868b816c202e8881fd0f179a4644ce6e7cbbb248ef0283a34b73ec73bb"}, - {file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:bd34f6d1810d9354dc7e35158aa6cc33456be7706df4420819af6ed966e85448"}, - {file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:edffbe3c510d8f4bf8640e02ca019e48a9b72357318383ca60e3330c23aaffc7"}, - {file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6d949f53ad4fc7cf02c44d6678e7ff05ec5f5552b235b9e136bd52e9bf730b91"}, - {file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:079b68f197c796e42aa80b1f739f058dcee796dc725cc9a1be0cdb08fc45b000"}, - {file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9c3a88d20e4fe4a2a4a84bf439a5ac9c9aba400b85244c63a1ab7088f85d9d25"}, - {file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4e285b5f2bf321fc0857b491b5028c5f276ec0c873b985d58d7748ece1d770dd"}, - {file = "lxml-4.9.1-cp39-cp39-win32.whl", hash = "sha256:ef72013e20dd5ba86a8ae1aed7f56f31d3374189aa8b433e7b12ad182c0d2dfb"}, - {file = "lxml-4.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:10d2017f9150248563bb579cd0d07c61c58da85c922b780060dcc9a3aa9f432d"}, - {file = "lxml-4.9.1-pp37-pypy37_pp73-macosx_10_15_x86_64.whl", hash = 
"sha256:0538747a9d7827ce3e16a8fdd201a99e661c7dee3c96c885d8ecba3c35d1032c"}, - {file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0645e934e940107e2fdbe7c5b6fb8ec6232444260752598bc4d09511bd056c0b"}, - {file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6daa662aba22ef3258934105be2dd9afa5bb45748f4f702a3b39a5bf53a1f4dc"}, - {file = "lxml-4.9.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:603a464c2e67d8a546ddaa206d98e3246e5db05594b97db844c2f0a1af37cf5b"}, - {file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c4b2e0559b68455c085fb0f6178e9752c4be3bba104d6e881eb5573b399d1eb2"}, - {file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0f3f0059891d3254c7b5fb935330d6db38d6519ecd238ca4fce93c234b4a0f73"}, - {file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c852b1530083a620cb0de5f3cd6826f19862bafeaf77586f1aef326e49d95f0c"}, - {file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:287605bede6bd36e930577c5925fcea17cb30453d96a7b4c63c14a257118dbb9"}, - {file = "lxml-4.9.1.tar.gz", hash = "sha256:fe749b052bb7233fe5d072fcb549221a8cb1a16725c47c37e42b0b9cb3ff2c3f"}, + {file = "lxml-4.5.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:ee2be8b8f72a2772e72ab926a3bccebf47bb727bda41ae070dc91d1fb759b726"}, + {file = "lxml-4.5.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:fadd2a63a2bfd7fb604508e553d1cf68eca250b2fbdbd81213b5f6f2fbf23529"}, + {file = "lxml-4.5.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:4f282737d187ae723b2633856085c31ae5d4d432968b7f3f478a48a54835f5c4"}, + {file = "lxml-4.5.1-cp27-cp27m-win32.whl", hash = "sha256:7fd88cb91a470b383aafad554c3fe1ccf6dfb2456ff0e84b95335d582a799804"}, + {file = "lxml-4.5.1-cp27-cp27m-win_amd64.whl", hash = "sha256:0790ddca3f825dd914978c94c2545dbea5f56f008b050e835403714babe62a5f"}, + {file = "lxml-4.5.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:9144ce36ca0824b29ebc2e02ca186e54040ebb224292072250467190fb613b96"}, + {file = "lxml-4.5.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:a636346c6c0e1092ffc202d97ec1843a75937d8c98aaf6771348ad6422e44bb0"}, + {file = "lxml-4.5.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:f95d28193c3863132b1f55c1056036bf580b5a488d908f7d22a04ace8935a3a9"}, + {file = "lxml-4.5.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:b26719890c79a1dae7d53acac5f089d66fd8cc68a81f4e4bd355e45470dc25e1"}, + {file = "lxml-4.5.1-cp35-cp35m-win32.whl", hash = "sha256:a9e3b8011388e7e373565daa5e92f6c9cb844790dc18e43073212bb3e76f7007"}, + {file = "lxml-4.5.1-cp35-cp35m-win_amd64.whl", hash = "sha256:2754d4406438c83144f9ffd3628bbe2dcc6d62b20dbc5c1ec4bc4385e5d44b42"}, + {file = "lxml-4.5.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:925baf6ff1ef2c45169f548cc85204433e061360bfa7d01e1be7ae38bef73194"}, + {file = "lxml-4.5.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a87dbee7ad9dce3aaefada2081843caf08a44a8f52e03e0a4cc5819f8398f2f4"}, + {file = "lxml-4.5.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:51bb4edeb36d24ec97eb3e6a6007be128b720114f9a875d6b370317d62ac80b9"}, + {file = "lxml-4.5.1-cp36-cp36m-win32.whl", hash = "sha256:c79e5debbe092e3c93ca4aee44c9a7631bdd407b2871cb541b979fd350bbbc29"}, + {file = 
"lxml-4.5.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b7462cdab6fffcda853338e1741ce99706cdf880d921b5a769202ea7b94e8528"}, + {file = "lxml-4.5.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:06748c7192eab0f48e3d35a7adae609a329c6257495d5e53878003660dc0fec6"}, + {file = "lxml-4.5.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:1aa7a6197c1cdd65d974f3e4953764eee3d9c7b67e3966616b41fab7f8f516b7"}, + {file = "lxml-4.5.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:afb53edf1046599991fb4a7d03e601ab5f5422a5435c47ee6ba91ec3b61416a6"}, + {file = "lxml-4.5.1-cp37-cp37m-win32.whl", hash = "sha256:2d1ddce96cf15f1254a68dba6935e6e0f1fe39247de631c115e84dd404a6f031"}, + {file = "lxml-4.5.1-cp37-cp37m-win_amd64.whl", hash = "sha256:22c6d34fdb0e65d5f782a4d1a1edb52e0a8365858dafb1c08cb1d16546cf0786"}, + {file = "lxml-4.5.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c47a8a5d00060122ca5908909478abce7bbf62d812e3fc35c6c802df8fb01fe7"}, + {file = "lxml-4.5.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:b77975465234ff49fdad871c08aa747aae06f5e5be62866595057c43f8d2f62c"}, + {file = "lxml-4.5.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:2b02c106709466a93ed424454ce4c970791c486d5fcdf52b0d822a7e29789626"}, + {file = "lxml-4.5.1-cp38-cp38-win32.whl", hash = "sha256:7eee37c1b9815e6505847aa5e68f192e8a1b730c5c7ead39ff317fde9ce29448"}, + {file = "lxml-4.5.1-cp38-cp38-win_amd64.whl", hash = "sha256:d8d40e0121ca1606aa9e78c28a3a7d88a05c06b3ca61630242cded87d8ce55fa"}, + {file = "lxml-4.5.1.tar.gz", hash = "sha256:27ee0faf8077c7c1a589573b1450743011117f1aa1a91d5ae776bbc5ca6070f2"}, ] -Mako = [ - {file = "Mako-1.2.3-py3-none-any.whl", hash = "sha256:c413a086e38cd885088d5e165305ee8eed04e8b3f8f62df343480da0a385735f"}, - {file = "Mako-1.2.3.tar.gz", hash = "sha256:7fde96466fcfeedb0eed94f187f20b23d85e4cb41444be0e542e2c8c65c396cd"}, +mako = [ + {file = "Mako-1.1.3-py2.py3-none-any.whl", hash = "sha256:93729a258e4ff0747c876bd9e20df1b9758028946e976324ccd2d68245c7b6a9"}, + {file = "Mako-1.1.3.tar.gz", hash = "sha256:8195c8c1400ceb53496064314c6736719c6f25e7479cd24c77be3d9361cddc27"}, ] -MarkupSafe = [ - {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"}, - {file = 
"MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"}, - {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"}, +markupsafe = [ + {file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"}, + {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"}, + {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"}, + {file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"}, + {file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = 
"sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"}, + {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"}, + {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"}, + {file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"}, + {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"}, + {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"}, + {file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"}, + {file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"}, + {file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"}, + {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"}, + {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"}, + {file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"}, + {file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"}, + {file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"}, + {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"}, + {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"}, + {file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"}, + {file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"}, + {file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"}, + {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"}, + {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"}, + {file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"}, + {file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"}, + {file = "MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15"}, + {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2"}, + {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl", 
hash = "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42"}, + {file = "MarkupSafe-1.1.1-cp38-cp38-win32.whl", hash = "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b"}, + {file = "MarkupSafe-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"}, + {file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"}, ] mccabe = [ {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"}, @@ -997,263 +898,175 @@ mock = [ {file = "mock-4.0.2-py3-none-any.whl", hash = "sha256:3f9b2c0196c60d21838f307f5825a7b86b678cedc58ab9e50a8988187b4d81e0"}, {file = "mock-4.0.2.tar.gz", hash = "sha256:dd33eb70232b6118298d516bbcecd26704689c386594f0f3c4f13867b2c56f72"}, ] -mypy-extensions = [ - {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"}, - {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"}, +more-itertools = [ + {file = "more-itertools-8.4.0.tar.gz", hash = "sha256:68c70cc7167bdf5c7c9d8f6954a7837089c6a36bf565383919bb595efb8a17e5"}, + {file = "more_itertools-8.4.0-py3-none-any.whl", hash = "sha256:b78134b2063dd214000685165d81c154522c3ee0a1c0d4d113c80361c234c5a2"}, ] netaddr = [ {file = "netaddr-0.7.19-py2.py3-none-any.whl", hash = "sha256:56b3558bd71f3f6999e4c52e349f38660e54a7a8a9943335f73dfc96883e08ca"}, {file = "netaddr-0.7.19.tar.gz", hash = "sha256:38aeec7cdd035081d3a4c306394b19d677623bf76fa0913f6695127c7753aefd"}, ] nodeenv = [ - {file = "nodeenv-1.7.0-py2.py3-none-any.whl", hash = "sha256:27083a7b96a25f2f5e1d8cb4b6317ee8aeda3bdd121394e5ac54e498028a042e"}, - {file = "nodeenv-1.7.0.tar.gz", hash = "sha256:e0e7f7dfb85fc5394c6fe1e8fa98131a2473e04311a45afb6508f7cf1836fa2b"}, + {file = "nodeenv-1.4.0-py2.py3-none-any.whl", hash = "sha256:4b0b77afa3ba9b54f4b6396e60b0c83f59eaeb2d63dc3cc7a70f7f4af96c82bc"}, ] packaging = [ - {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"}, - {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"}, + {file = "packaging-20.4-py2.py3-none-any.whl", hash = "sha256:998416ba6962ae7fbd6596850b80e17859a5753ba17c32284f67bfff33784181"}, + {file = "packaging-20.4.tar.gz", hash = "sha256:4357f74f47b9c12db93624a82154e9b120fa8293699949152b22065d556079f8"}, ] paramiko = [ - {file = "paramiko-3.0.0-py3-none-any.whl", hash = "sha256:6bef55b882c9d130f8015b9a26f4bd93f710e90fe7478b9dcc810304e79b3cd8"}, - {file = "paramiko-3.0.0.tar.gz", hash = "sha256:fedc9b1dd43bc1d45f67f1ceca10bc336605427a46dcdf8dec6bfea3edf57965"}, + {file = "paramiko-2.7.1-py2.py3-none-any.whl", hash = "sha256:9c980875fa4d2cb751604664e9a2d0f69096643f5be4db1b99599fe114a97b2f"}, + {file = "paramiko-2.7.1.tar.gz", hash = "sha256:920492895db8013f6cc0179293147f830b8c7b21fdfc839b6bad760c27459d9f"}, ] -pathlib2 = [ - {file = "pathlib2-2.3.7.post1-py2.py3-none-any.whl", hash = "sha256:5266a0fd000452f1b3467d782f079a4343c63aaa119221fbdc4e39577489ca5b"}, - {file = "pathlib2-2.3.7.post1.tar.gz", hash = "sha256:9fe0edad898b83c0c3e199c842b27ed216645d2e177757b2dd67384d4113c641"}, -] -pathspec = [ - {file = "pathspec-0.11.0-py3-none-any.whl", hash = "sha256:3a66eb970cbac598f9e5ccb5b2cf58930cd8e3ed86d393d541eaf2d8b1705229"}, - {file = 
"pathspec-0.11.0.tar.gz", hash = "sha256:64d338d4e0914e91c1792321e6907b5a593f1ab1851de7fc269557a21b30ebbc"}, -] -Pillow = [ - {file = "Pillow-9.4.0-1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b4b4e9dda4f4e4c4e6896f93e84a8f0bcca3b059de9ddf67dac3c334b1195e1"}, - {file = "Pillow-9.4.0-1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fb5c1ad6bad98c57482236a21bf985ab0ef42bd51f7ad4e4538e89a997624e12"}, - {file = "Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:f0caf4a5dcf610d96c3bd32932bfac8aee61c96e60481c2a0ea58da435e25acd"}, - {file = "Pillow-9.4.0-1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:3f4cc516e0b264c8d4ccd6b6cbc69a07c6d582d8337df79be1e15a5056b258c9"}, - {file = "Pillow-9.4.0-1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:b8c2f6eb0df979ee99433d8b3f6d193d9590f735cf12274c108bd954e30ca858"}, - {file = "Pillow-9.4.0-1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b70756ec9417c34e097f987b4d8c510975216ad26ba6e57ccb53bc758f490dab"}, - {file = "Pillow-9.4.0-1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:43521ce2c4b865d385e78579a082b6ad1166ebed2b1a2293c3be1d68dd7ca3b9"}, - {file = "Pillow-9.4.0-2-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:9d9a62576b68cd90f7075876f4e8444487db5eeea0e4df3ba298ee38a8d067b0"}, - {file = "Pillow-9.4.0-2-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:87708d78a14d56a990fbf4f9cb350b7d89ee8988705e58e39bdf4d82c149210f"}, - {file = "Pillow-9.4.0-2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:8a2b5874d17e72dfb80d917213abd55d7e1ed2479f38f001f264f7ce7bae757c"}, - {file = "Pillow-9.4.0-2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:83125753a60cfc8c412de5896d10a0a405e0bd88d0470ad82e0869ddf0cb3848"}, - {file = "Pillow-9.4.0-2-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9e5f94742033898bfe84c93c831a6f552bb629448d4072dd312306bab3bd96f1"}, - {file = "Pillow-9.4.0-2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:013016af6b3a12a2f40b704677f8b51f72cb007dac785a9933d5c86a72a7fe33"}, - {file = "Pillow-9.4.0-2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:99d92d148dd03fd19d16175b6d355cc1b01faf80dae93c6c3eb4163709edc0a9"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:2968c58feca624bb6c8502f9564dd187d0e1389964898f5e9e1fbc8533169157"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c5c1362c14aee73f50143d74389b2c158707b4abce2cb055b7ad37ce60738d47"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd752c5ff1b4a870b7661234694f24b1d2b9076b8bf337321a814c612665f343"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a3049a10261d7f2b6514d35bbb7a4dfc3ece4c4de14ef5876c4b7a23a0e566d"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16a8df99701f9095bea8a6c4b3197da105df6f74e6176c5b410bc2df2fd29a57"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:94cdff45173b1919350601f82d61365e792895e3c3a3443cf99819e6fbf717a5"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ed3e4b4e1e6de75fdc16d3259098de7c6571b1a6cc863b1a49e7d3d53e036070"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5b2f8a31bd43e0f18172d8ac82347c8f37ef3e0b414431157718aa234991b28"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = 
"sha256:09b89ddc95c248ee788328528e6a2996e09eaccddeeb82a5356e92645733be35"}, - {file = "Pillow-9.4.0-cp310-cp310-win32.whl", hash = "sha256:f09598b416ba39a8f489c124447b007fe865f786a89dbfa48bb5cf395693132a"}, - {file = "Pillow-9.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6e78171be3fb7941f9910ea15b4b14ec27725865a73c15277bc39f5ca4f8391"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3fa1284762aacca6dc97474ee9c16f83990b8eeb6697f2ba17140d54b453e133"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eaef5d2de3c7e9b21f1e762f289d17b726c2239a42b11e25446abf82b26ac132"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4dfdae195335abb4e89cc9762b2edc524f3c6e80d647a9a81bf81e17e3fb6f0"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6abfb51a82e919e3933eb137e17c4ae9c0475a25508ea88993bb59faf82f3b35"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:451f10ef963918e65b8869e17d67db5e2f4ab40e716ee6ce7129b0cde2876eab"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:6663977496d616b618b6cfa43ec86e479ee62b942e1da76a2c3daa1c75933ef4"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:60e7da3a3ad1812c128750fc1bc14a7ceeb8d29f77e0a2356a8fb2aa8925287d"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:19005a8e58b7c1796bc0167862b1f54a64d3b44ee5d48152b06bb861458bc0f8"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f715c32e774a60a337b2bb8ad9839b4abf75b267a0f18806f6f4f5f1688c4b5a"}, - {file = "Pillow-9.4.0-cp311-cp311-win32.whl", hash = "sha256:b222090c455d6d1a64e6b7bb5f4035c4dff479e22455c9eaa1bdd4c75b52c80c"}, - {file = "Pillow-9.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:ba6612b6548220ff5e9df85261bddc811a057b0b465a1226b39bfb8550616aee"}, - {file = "Pillow-9.4.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5f532a2ad4d174eb73494e7397988e22bf427f91acc8e6ebf5bb10597b49c493"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dd5a9c3091a0f414a963d427f920368e2b6a4c2f7527fdd82cde8ef0bc7a327"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef21af928e807f10bf4141cad4746eee692a0dd3ff56cfb25fce076ec3cc8abe"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:847b114580c5cc9ebaf216dd8c8dbc6b00a3b7ab0131e173d7120e6deade1f57"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:653d7fb2df65efefbcbf81ef5fe5e5be931f1ee4332c2893ca638c9b11a409c4"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:46f39cab8bbf4a384ba7cb0bc8bae7b7062b6a11cfac1ca4bc144dea90d4a9f5"}, - {file = "Pillow-9.4.0-cp37-cp37m-win32.whl", hash = "sha256:7ac7594397698f77bce84382929747130765f66406dc2cd8b4ab4da68ade4c6e"}, - {file = "Pillow-9.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:46c259e87199041583658457372a183636ae8cd56dbf3f0755e0f376a7f9d0e6"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:0e51f608da093e5d9038c592b5b575cadc12fd748af1479b5e858045fff955a9"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:765cb54c0b8724a7c12c55146ae4647e0274a839fb6de7bcba841e04298e1011"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", 
hash = "sha256:519e14e2c49fcf7616d6d2cfc5c70adae95682ae20f0395e9280db85e8d6c4df"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d197df5489004db87d90b918033edbeee0bd6df3848a204bca3ff0a903bef837"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0845adc64fe9886db00f5ab68c4a8cd933ab749a87747555cec1c95acea64b0b"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:e1339790c083c5a4de48f688b4841f18df839eb3c9584a770cbd818b33e26d5d"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:a96e6e23f2b79433390273eaf8cc94fec9c6370842e577ab10dabdcc7ea0a66b"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:7cfc287da09f9d2a7ec146ee4d72d6ea1342e770d975e49a8621bf54eaa8f30f"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d7081c084ceb58278dd3cf81f836bc818978c0ccc770cbbb202125ddabec6628"}, - {file = "Pillow-9.4.0-cp38-cp38-win32.whl", hash = "sha256:df41112ccce5d47770a0c13651479fbcd8793f34232a2dd9faeccb75eb5d0d0d"}, - {file = "Pillow-9.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7a21222644ab69ddd9967cfe6f2bb420b460dae4289c9d40ff9a4896e7c35c9a"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0f3269304c1a7ce82f1759c12ce731ef9b6e95b6df829dccd9fe42912cc48569"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cb362e3b0976dc994857391b776ddaa8c13c28a16f80ac6522c23d5257156bed"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2e0f87144fcbbe54297cae708c5e7f9da21a4646523456b00cc956bd4c65815"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:28676836c7796805914b76b1837a40f76827ee0d5398f72f7dcc634bae7c6264"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0884ba7b515163a1a05440a138adeb722b8a6ae2c2b33aea93ea3118dd3a899e"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:53dcb50fbdc3fb2c55431a9b30caeb2f7027fcd2aeb501459464f0214200a503"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:e8c5cf126889a4de385c02a2c3d3aba4b00f70234bfddae82a5eaa3ee6d5e3e6"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6c6b1389ed66cdd174d040105123a5a1bc91d0aa7059c7261d20e583b6d8cbd2"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0dd4c681b82214b36273c18ca7ee87065a50e013112eea7d78c7a1b89a739153"}, - {file = "Pillow-9.4.0-cp39-cp39-win32.whl", hash = "sha256:6d9dfb9959a3b0039ee06c1a1a90dc23bac3b430842dcb97908ddde05870601c"}, - {file = "Pillow-9.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:54614444887e0d3043557d9dbc697dbb16cfb5a35d672b7a0fcc1ed0cf1c600b"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b9b752ab91e78234941e44abdecc07f1f0d8f51fb62941d32995b8161f68cfe5"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3b56206244dc8711f7e8b7d6cad4663917cd5b2d950799425076681e8766286"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aabdab8ec1e7ca7f1434d042bf8b1e92056245fb179790dc97ed040361f16bfd"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db74f5562c09953b2c5f8ec4b7dfd3f5421f31811e97d1dbc0a7c93d6e3a24df"}, - {file = 
"Pillow-9.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e9d7747847c53a16a729b6ee5e737cf170f7a16611c143d95aa60a109a59c336"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b52ff4f4e002f828ea6483faf4c4e8deea8d743cf801b74910243c58acc6eda3"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:575d8912dca808edd9acd6f7795199332696d3469665ef26163cd090fa1f8bfa"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c4ed2ff6760e98d262e0cc9c9a7f7b8a9f61aa4d47c58835cdaf7b0b8811bb"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e621b0246192d3b9cb1dc62c78cfa4c6f6d2ddc0ec207d43c0dedecb914f152a"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8f127e7b028900421cad64f51f75c051b628db17fb00e099eb148761eed598c9"}, - {file = "Pillow-9.4.0.tar.gz", hash = "sha256:a1c2d7780448eb93fbcc3789bf3916aa5720d942e37945f4056680317f1cd23e"}, -] -platformdirs = [ - {file = "platformdirs-3.0.0-py3-none-any.whl", hash = "sha256:b1d5eb14f221506f50d6604a561f4c5786d9e80355219694a1b244bcd96f4567"}, - {file = "platformdirs-3.0.0.tar.gz", hash = "sha256:8a1228abb1ef82d788f74139988b137e78692984ec7b08eaa6c65f1723af28f9"}, +pillow = [ + {file = "Pillow-7.1.2-cp35-cp35m-macosx_10_10_intel.whl", hash = "sha256:ae2b270f9a0b8822b98655cb3a59cdb1bd54a34807c6c56b76dd2e786c3b7db3"}, + {file = "Pillow-7.1.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:d23e2aa9b969cf9c26edfb4b56307792b8b374202810bd949effd1c6e11ebd6d"}, + {file = "Pillow-7.1.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:b532bcc2f008e96fd9241177ec580829dee817b090532f43e54074ecffdcd97f"}, + {file = "Pillow-7.1.2-cp35-cp35m-win32.whl", hash = "sha256:12e4bad6bddd8546a2f9771485c7e3d2b546b458ae8ff79621214119ac244523"}, + {file = "Pillow-7.1.2-cp35-cp35m-win_amd64.whl", hash = "sha256:9744350687459234867cbebfe9df8f35ef9e1538f3e729adbd8fde0761adb705"}, + {file = "Pillow-7.1.2-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:f54be399340aa602066adb63a86a6a5d4f395adfdd9da2b9a0162ea808c7b276"}, + {file = "Pillow-7.1.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:1f694e28c169655c50bb89a3fa07f3b854d71eb47f50783621de813979ba87f3"}, + {file = "Pillow-7.1.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f784aad988f12c80aacfa5b381ec21fd3f38f851720f652b9f33facc5101cf4d"}, + {file = "Pillow-7.1.2-cp36-cp36m-win32.whl", hash = "sha256:b37bb3bd35edf53125b0ff257822afa6962649995cbdfde2791ddb62b239f891"}, + {file = "Pillow-7.1.2-cp36-cp36m-win_amd64.whl", hash = "sha256:b67a6c47ed963c709ed24566daa3f95a18f07d3831334da570c71da53d97d088"}, + {file = "Pillow-7.1.2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:eaa83729eab9c60884f362ada982d3a06beaa6cc8b084cf9f76cae7739481dfa"}, + {file = "Pillow-7.1.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f46e0e024346e1474083c729d50de909974237c72daca05393ee32389dabe457"}, + {file = "Pillow-7.1.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:0e2a3bceb0fd4e0cb17192ae506d5f082b309ffe5fc370a5667959c9b2f85fa3"}, + {file = "Pillow-7.1.2-cp37-cp37m-win32.whl", hash = "sha256:ccc9ad2460eb5bee5642eaf75a0438d7f8887d484490d5117b98edd7f33118b7"}, + {file = "Pillow-7.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b943e71c2065ade6fef223358e56c167fc6ce31c50bc7a02dd5c17ee4338e8ac"}, + {file = "Pillow-7.1.2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:04766c4930c174b46fd72d450674612ab44cca977ebbcc2dde722c6933290107"}, + {file = 
"Pillow-7.1.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:f455efb7a98557412dc6f8e463c1faf1f1911ec2432059fa3e582b6000fc90e2"}, + {file = "Pillow-7.1.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:ee94fce8d003ac9fd206496f2707efe9eadcb278d94c271f129ab36aa7181344"}, + {file = "Pillow-7.1.2-cp38-cp38-win32.whl", hash = "sha256:4b02b9c27fad2054932e89f39703646d0c543f21d3cc5b8e05434215121c28cd"}, + {file = "Pillow-7.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:3d25dd8d688f7318dca6d8cd4f962a360ee40346c15893ae3b95c061cdbc4079"}, + {file = "Pillow-7.1.2-pp373-pypy36_pp73-win32.whl", hash = "sha256:0f01e63c34f0e1e2580cc0b24e86a5ccbbfa8830909a52ee17624c4193224cd9"}, + {file = "Pillow-7.1.2-py3.8-macosx-10.9-x86_64.egg", hash = "sha256:70e3e0d99a0dcda66283a185f80697a9b08806963c6149c8e6c5f452b2aa59c0"}, + {file = "Pillow-7.1.2.tar.gz", hash = "sha256:a0b49960110bc6ff5fead46013bcb8825d101026d466f3a4de3476defe0fb0dd"}, ] pluggy = [ - {file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"}, - {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"}, + {file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"}, + {file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"}, ] pre-commit = [ {file = "pre_commit-2.1.1-py2.py3-none-any.whl", hash = "sha256:09ebe467f43ce24377f8c2f200fe3cd2570d328eb2ce0568c8e96ce19da45fa6"}, {file = "pre_commit-2.1.1.tar.gz", hash = "sha256:f8d555e31e2051892c7f7b3ad9f620bd2c09271d87e9eedb2ad831737d6211eb"}, ] protobuf = [ - {file = "protobuf-4.21.9-cp310-abi3-win32.whl", hash = "sha256:6e0be9f09bf9b6cf497b27425487706fa48c6d1632ddd94dab1a5fe11a422392"}, - {file = "protobuf-4.21.9-cp310-abi3-win_amd64.whl", hash = "sha256:a7d0ea43949d45b836234f4ebb5ba0b22e7432d065394b532cdca8f98415e3cf"}, - {file = "protobuf-4.21.9-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:b5ab0b8918c136345ff045d4b3d5f719b505b7c8af45092d7f45e304f55e50a1"}, - {file = "protobuf-4.21.9-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:2c9c2ed7466ad565f18668aa4731c535511c5d9a40c6da39524bccf43e441719"}, - {file = "protobuf-4.21.9-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:e575c57dc8b5b2b2caa436c16d44ef6981f2235eb7179bfc847557886376d740"}, - {file = "protobuf-4.21.9-cp37-cp37m-win32.whl", hash = "sha256:9227c14010acd9ae7702d6467b4625b6fe853175a6b150e539b21d2b2f2b409c"}, - {file = "protobuf-4.21.9-cp37-cp37m-win_amd64.whl", hash = "sha256:a419cc95fca8694804709b8c4f2326266d29659b126a93befe210f5bbc772536"}, - {file = "protobuf-4.21.9-cp38-cp38-win32.whl", hash = "sha256:5b0834e61fb38f34ba8840d7dcb2e5a2f03de0c714e0293b3963b79db26de8ce"}, - {file = "protobuf-4.21.9-cp38-cp38-win_amd64.whl", hash = "sha256:84ea107016244dfc1eecae7684f7ce13c788b9a644cd3fca5b77871366556444"}, - {file = "protobuf-4.21.9-cp39-cp39-win32.whl", hash = "sha256:f9eae277dd240ae19bb06ff4e2346e771252b0e619421965504bd1b1bba7c5fa"}, - {file = "protobuf-4.21.9-cp39-cp39-win_amd64.whl", hash = "sha256:6e312e280fbe3c74ea9e080d9e6080b636798b5e3939242298b591064470b06b"}, - {file = "protobuf-4.21.9-py2.py3-none-any.whl", hash = "sha256:7eb8f2cc41a34e9c956c256e3ac766cf4e1a4c9c925dc757a41a01be3e852965"}, - {file = "protobuf-4.21.9-py3-none-any.whl", hash = "sha256:48e2cd6b88c6ed3d5877a3ea40df79d08374088e89bedc32557348848dff250b"}, - {file = "protobuf-4.21.9.tar.gz", hash = 
"sha256:61f21493d96d2a77f9ca84fefa105872550ab5ef71d21c458eb80edcf4885a99"}, + {file = "protobuf-3.12.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:e1464a4a2cf12f58f662c8e6421772c07947266293fb701cb39cd9c1e183f63c"}, + {file = "protobuf-3.12.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:6f349adabf1c004aba53f7b4633459f8ca8a09654bf7e69b509c95a454755776"}, + {file = "protobuf-3.12.2-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:be04fe14ceed7f8641e30f36077c1a654ff6f17d0c7a5283b699d057d150d82a"}, + {file = "protobuf-3.12.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f4b73736108a416c76c17a8a09bc73af3d91edaa26c682aaa460ef91a47168d3"}, + {file = "protobuf-3.12.2-cp35-cp35m-win32.whl", hash = "sha256:5524c7020eb1fb7319472cb75c4c3206ef18b34d6034d2ee420a60f99cddeb07"}, + {file = "protobuf-3.12.2-cp35-cp35m-win_amd64.whl", hash = "sha256:bff02030bab8b969f4de597543e55bd05e968567acb25c0a87495a31eb09e925"}, + {file = "protobuf-3.12.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c9ca9f76805e5a637605f171f6c4772fc4a81eced4e2f708f79c75166a2c99ea"}, + {file = "protobuf-3.12.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:304e08440c4a41a0f3592d2a38934aad6919d692bb0edfb355548786728f9a5e"}, + {file = "protobuf-3.12.2-cp36-cp36m-win32.whl", hash = "sha256:b5a114ea9b7fc90c2cc4867a866512672a47f66b154c6d7ee7e48ddb68b68122"}, + {file = "protobuf-3.12.2-cp36-cp36m-win_amd64.whl", hash = "sha256:85b94d2653b0fdf6d879e39d51018bf5ccd86c81c04e18a98e9888694b98226f"}, + {file = "protobuf-3.12.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a7ab28a8f1f043c58d157bceb64f80e4d2f7f1b934bc7ff5e7f7a55a337ea8b0"}, + {file = "protobuf-3.12.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:eafe9fa19fcefef424ee089fb01ac7177ff3691af7cc2ae8791ae523eb6ca907"}, + {file = "protobuf-3.12.2-cp37-cp37m-win32.whl", hash = "sha256:612bc97e42b22af10ba25e4140963fbaa4c5181487d163f4eb55b0b15b3dfcd2"}, + {file = "protobuf-3.12.2-cp37-cp37m-win_amd64.whl", hash = "sha256:e72736dd822748b0721f41f9aaaf6a5b6d5cfc78f6c8690263aef8bba4457f0e"}, + {file = "protobuf-3.12.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:87535dc2d2ef007b9d44e309d2b8ea27a03d2fa09556a72364d706fcb7090828"}, + {file = "protobuf-3.12.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:50b5fee674878b14baea73b4568dc478c46a31dd50157a5b5d2f71138243b1a9"}, + {file = "protobuf-3.12.2-py2.py3-none-any.whl", hash = "sha256:a96f8fc625e9ff568838e556f6f6ae8eca8b4837cdfb3f90efcb7c00e342a2eb"}, + {file = "protobuf-3.12.2.tar.gz", hash = "sha256:49ef8ab4c27812a89a76fa894fe7a08f42f2147078392c0dee51d4a444ef6df5"}, ] py = [ - {file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"}, - {file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"}, + {file = "py-1.9.0-py2.py3-none-any.whl", hash = "sha256:366389d1db726cd2fcfc79732e75410e5fe4d31db13692115529d34069a043c2"}, + {file = "py-1.9.0.tar.gz", hash = "sha256:9ca6883ce56b4e8da7e79ac18787889fa5206c79dcc67fb065376cd2fe03f342"}, ] pycodestyle = [ {file = "pycodestyle-2.6.0-py2.py3-none-any.whl", hash = "sha256:2295e7b2f6b5bd100585ebcb1f616591b652db8a741695b3d8f5d28bdc934367"}, {file = "pycodestyle-2.6.0.tar.gz", hash = "sha256:c58a7d2815e0e8d7972bf1803331fb0152f867bd89adf8a01dfd55085434192e"}, ] pycparser = [ - {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"}, - {file = "pycparser-2.21.tar.gz", hash = 
"sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"}, + {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, + {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, ] pyflakes = [ {file = "pyflakes-2.2.0-py2.py3-none-any.whl", hash = "sha256:0d94e0e05a19e57a99444b6ddcf9a6eb2e5c68d3ca1e98e90707af8152c90a92"}, {file = "pyflakes-2.2.0.tar.gz", hash = "sha256:35b2d75ee967ea93b55750aa9edbbf72813e06a66ba54438df2cfac9e3c27fc8"}, ] -PyNaCl = [ - {file = "PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:401002a4aaa07c9414132aaed7f6836ff98f59277a234704ff66878c2ee4a0d1"}, - {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cb72a79269189d4e0dc537556f4740f7f0a9ec41c1322598799b0bdad4ef92"}, - {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a36d4a9dda1f19ce6e03c9a784a2921a4b726b02e1c736600ca9c22029474394"}, - {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0c84947a22519e013607c9be43706dd42513f9e6ae5d39d3613ca1e142fba44d"}, - {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06b8f6fa7f5de8d5d2f7573fe8c863c051225a27b61e6860fd047b1775807858"}, - {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a422368fc821589c228f4c49438a368831cb5bbc0eab5ebe1d7fac9dded6567b"}, - {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:61f642bf2378713e2c2e1de73444a3778e5f0a38be6fee0fe532fe30060282ff"}, - {file = "PyNaCl-1.5.0-cp36-abi3-win32.whl", hash = "sha256:e46dae94e34b085175f8abb3b0aaa7da40767865ac82c928eeb9e57e1ea8a543"}, - {file = "PyNaCl-1.5.0-cp36-abi3-win_amd64.whl", hash = "sha256:20f42270d27e1b6a29f54032090b972d97f0a1b0948cc52392041ef7831fee93"}, - {file = "PyNaCl-1.5.0.tar.gz", hash = "sha256:8ac7448f09ab85811607bdd21ec2464495ac8b7c66d146bf545b0f08fb9220ba"}, +pynacl = [ + {file = "PyNaCl-1.4.0-cp27-cp27m-macosx_10_10_x86_64.whl", hash = "sha256:ea6841bc3a76fa4942ce00f3bda7d436fda21e2d91602b9e21b7ca9ecab8f3ff"}, + {file = "PyNaCl-1.4.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:d452a6746f0a7e11121e64625109bc4468fc3100452817001dbe018bb8b08514"}, + {file = "PyNaCl-1.4.0-cp27-cp27m-win32.whl", hash = "sha256:2fe0fc5a2480361dcaf4e6e7cea00e078fcda07ba45f811b167e3f99e8cff574"}, + {file = "PyNaCl-1.4.0-cp27-cp27m-win_amd64.whl", hash = "sha256:f8851ab9041756003119368c1e6cd0b9c631f46d686b3904b18c0139f4419f80"}, + {file = "PyNaCl-1.4.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:7757ae33dae81c300487591c68790dfb5145c7d03324000433d9a2c141f82af7"}, + {file = "PyNaCl-1.4.0-cp35-abi3-macosx_10_10_x86_64.whl", hash = "sha256:757250ddb3bff1eecd7e41e65f7f833a8405fede0194319f87899690624f2122"}, + {file = "PyNaCl-1.4.0-cp35-abi3-manylinux1_x86_64.whl", hash = "sha256:30f9b96db44e09b3304f9ea95079b1b7316b2b4f3744fe3aaecccd95d547063d"}, + {file = "PyNaCl-1.4.0-cp35-cp35m-win32.whl", hash = "sha256:06cbb4d9b2c4bd3c8dc0d267416aaed79906e7b33f114ddbf0911969794b1cc4"}, + {file = "PyNaCl-1.4.0-cp35-cp35m-win_amd64.whl", hash = "sha256:511d269ee845037b95c9781aa702f90ccc36036f95d0f31373a6a79bd8242e25"}, + {file = "PyNaCl-1.4.0-cp36-cp36m-win32.whl", hash = "sha256:11335f09060af52c97137d4ac54285bcb7df0cef29014a1a4efe64ac065434c4"}, + {file = 
"PyNaCl-1.4.0-cp36-cp36m-win_amd64.whl", hash = "sha256:cd401ccbc2a249a47a3a1724c2918fcd04be1f7b54eb2a5a71ff915db0ac51c6"}, + {file = "PyNaCl-1.4.0-cp37-cp37m-win32.whl", hash = "sha256:8122ba5f2a2169ca5da936b2e5a511740ffb73979381b4229d9188f6dcb22f1f"}, + {file = "PyNaCl-1.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:537a7ccbea22905a0ab36ea58577b39d1fa9b1884869d173b5cf111f006f689f"}, + {file = "PyNaCl-1.4.0-cp38-cp38-win32.whl", hash = "sha256:9c4a7ea4fb81536c1b1f5cc44d54a296f96ae78c1ebd2311bd0b60be45a48d96"}, + {file = "PyNaCl-1.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7c6092102219f59ff29788860ccb021e80fffd953920c4a8653889c029b2d420"}, + {file = "PyNaCl-1.4.0.tar.gz", hash = "sha256:54e9a2c849c742006516ad56a88f5c74bf2ce92c9f67435187c3c5953b346505"}, +] +pyparsing = [ + {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, + {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, ] pyproj = [ - {file = "pyproj-3.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:473961faef7a9fd723c5d432f65220ea6ab3854e606bf84b4d409a75a4261c78"}, - {file = "pyproj-3.3.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07c9d8d7ec009bbac09e233cfc725601586fe06880e5538a3a44eaf560ba3a62"}, - {file = "pyproj-3.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2fef9c1e339f25c57f6ae0558b5ab1bbdf7994529a30d8d7504fc6302ea51c03"}, - {file = "pyproj-3.3.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:140fa649fedd04f680a39f8ad339799a55cb1c49f6a84e1b32b97e49646647aa"}, - {file = "pyproj-3.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b59c08aea13ee428cf8a919212d55c036cc94784805ed77c8f31a4d1f541058c"}, - {file = "pyproj-3.3.1-cp310-cp310-win32.whl", hash = "sha256:1adc9ccd1bf04998493b6a2e87e60656c75ab790653b36cfe351e9ef214828ed"}, - {file = "pyproj-3.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:42eea10afc750fccd1c5c4ba56de29ab791ab4d83c1f7db72705566282ac5396"}, - {file = "pyproj-3.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:531ea36519fa7b581466d4b6ab32f66ae4dadd9499d726352f71ee5e19c3d1c5"}, - {file = "pyproj-3.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67025e37598a6bbed2c9c6c9e4c911f6dd39315d3e1148ead935a5c4d64309d5"}, - {file = "pyproj-3.3.1-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aed1a3c0cd4182425f91b48d5db39f459bc2fe0d88017ead6425a1bc85faee33"}, - {file = "pyproj-3.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3cc4771403db54494e1e55bca8e6d33cde322f8cf0ed39f1557ff109c66d2cd1"}, - {file = "pyproj-3.3.1-cp38-cp38-win32.whl", hash = "sha256:c99f7b5757a28040a2dd4a28c9805fdf13eef79a796f4a566ab5cb362d10630d"}, - {file = "pyproj-3.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:5dac03d4338a4c8bd0f69144c527474f517b4cbd7d2d8c532cd8937799723248"}, - {file = "pyproj-3.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:56b0f9ee2c5b2520b18db30a393a7b86130cf527ddbb8c96e7f3c837474a9d79"}, - {file = "pyproj-3.3.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f1032e5dfb50eae06382bcc7b9011b994f7104d932fe91bd83a722275e30e8ce"}, - {file = "pyproj-3.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f92d8f6514516124abb714dce912b20867831162cfff9fae2678ef07b6fcf0f"}, - {file = "pyproj-3.3.1-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:1ef1bfbe2dcc558c7a98e2f1836abdcd630390f3160724a6f4f5c818b2be0ad5"}, - {file = "pyproj-3.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ca5f32b56210429b367ca4f9a57ffe67975c487af82e179a24370879a3daf68"}, - {file = "pyproj-3.3.1-cp39-cp39-win32.whl", hash = "sha256:aba199704c824fb84ab64927e7bc9ef71e603e483130ec0f7e09e97259b8f61f"}, - {file = "pyproj-3.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:120d45ed73144c65e9677dc73ba8a531c495d179dd9f9f0471ac5acc02d7ac4b"}, - {file = "pyproj-3.3.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:52efb681647dfac185cc655a709bc0caaf910031a0390f816f5fc8ce150cbedc"}, - {file = "pyproj-3.3.1-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ab0d6e38fda7c13726afacaf62e9f9dd858089d67910471758afd9cb24e0ecd"}, - {file = "pyproj-3.3.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45487942c19c5a8b09c91964ea3201f4e094518e34743cae373889a36e3d9260"}, - {file = "pyproj-3.3.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:797ad5655d484feac14b0fbb4a4efeaac0cf780a223046e2465494c767fd1c3b"}, - {file = "pyproj-3.3.1.tar.gz", hash = "sha256:b3d8e14d91cc95fb3dbc03a9d0588ac58326803eefa5bbb0978d109de3304fbe"}, + {file = "pyproj-2.6.1.post1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:457ad3856014ac26af1d86def6dc8cf69c1fa377b6e2fd6e97912d51cf66bdbe"}, + {file = "pyproj-2.6.1.post1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:6f3f36440ea61f5f6da4e6beb365dddcbe159815450001d9fb753545affa45ff"}, + {file = "pyproj-2.6.1.post1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6a212d0e5c7efa33d039f0c8b0a489e2204fcd28b56206567852ad7f5f2a653e"}, + {file = "pyproj-2.6.1.post1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:451a3d1c563b672458029ebc04acbb3266cd8b3025268eb871a9176dc3638911"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e015f900b4b84e908f8035ab16ebf02d67389c1c216c17a2196fc2e515c00762"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a13e5731b3a360ee7fbd1e9199ec9203fafcece8ebd0b1351f16d0a90cad6828"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:33c1c2968a4f4f87d517c4275a18b557e5c13907cf2609371fadea8463c3ba05"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:3fef83a01c1e86dd9fa99d8214f749837cfafc34d9d6230b4b0a998fa7a68a1a"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-win32.whl", hash = "sha256:a6ac4861979cd05a0f5400fefa41d26c0269a5fb8237618aef7c998907db39e1"}, + {file = "pyproj-2.6.1.post1-cp36-cp36m-win_amd64.whl", hash = "sha256:cbf6ccf990860b06c5262ff97c4b78e1d07883981635cd53a6aa438a68d92945"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:adacb67a9f71fb54ca1b887a6ab20f32dd536fcdf2acec84a19e25ad768f7965"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:e50d5d20b87758acf8f13f39a3b3eb21d5ef32339d2bc8cdeb8092416e0051df"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:2518d1606e2229b82318e704b40290e02a2a52d77b40cdcb2978973d6fc27b20"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:33a5d1cfbb40a019422eb80709a0e270704390ecde7278fdc0b88f3647c56a39"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-win32.whl", hash = "sha256:daf2998e3f5bcdd579a18faf009f37f53538e9b7d0a252581a610297d31e8536"}, + {file = "pyproj-2.6.1.post1-cp37-cp37m-win_amd64.whl", hash = 
"sha256:a8b7c8accdc61dac8e91acab7c1f7b4590d1e102f2ee9b1f1e6399fad225958e"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9f097e8f341a162438918e908be86d105a28194ff6224633b2e9616c5031153f"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:d90a5d1fdd066b0e9b22409b0f5e81933469918fa04c2cf7f9a76ce84cb29dad"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:f5a8015c74ec8f6508aebf493b58ba20ccb4da8168bf05f0c2a37faccb518da9"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:d87836be6b720fb4d9c112136aa47621b6ca09a554e645c1081561eb8e2fa1f4"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-win32.whl", hash = "sha256:bc2f3a15d065e206d63edd2cc4739aa0a35c05338ee276ab1dc72f56f1944bda"}, + {file = "pyproj-2.6.1.post1-cp38-cp38-win_amd64.whl", hash = "sha256:93cbad7b699e8e80def7de80c350617f35e6a0b82862f8ce3c014657c25fdb3c"}, + {file = "pyproj-2.6.1.post1.tar.gz", hash = "sha256:4f5b02b4abbd41610397c635b275a8ee4a2b5bc72a75572b98ac6ae7befa471e"}, ] pytest = [ - {file = "pytest-6.2.5-py3-none-any.whl", hash = "sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"}, - {file = "pytest-6.2.5.tar.gz", hash = "sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89"}, + {file = "pytest-5.4.3-py3-none-any.whl", hash = "sha256:5c0db86b698e8f170ba4582a492248919255fcd4c79b1ee64ace34301fb589a1"}, + {file = "pytest-5.4.3.tar.gz", hash = "sha256:7979331bfcba207414f5e1263b5a0f8f521d0f457318836a7355531ed1a4c7d8"}, ] -PyYAML = [ - {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"}, - {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"}, - {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"}, - {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"}, - {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"}, - {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"}, - {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"}, - {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"}, - 
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"}, - {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"}, - {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"}, - {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"}, - {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"}, - {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"}, - {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"}, - {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"}, - {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"}, - {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"}, - {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"}, - {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"}, - {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash 
= "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"}, - {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"}, - {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"}, - {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"}, -] -setuptools = [ - {file = "setuptools-67.4.0-py3-none-any.whl", hash = "sha256:f106dee1b506dee5102cc3f3e9e68137bbad6d47b616be7991714b0c62204251"}, - {file = "setuptools-67.4.0.tar.gz", hash = "sha256:e5fd0a713141a4a105412233c63dc4e17ba0090c8e8334594ac790ec97792330"}, +pyyaml = [ + {file = "PyYAML-5.3.1-cp27-cp27m-win32.whl", hash = "sha256:74809a57b329d6cc0fdccee6318f44b9b8649961fa73144a98735b0aaf029f1f"}, + {file = "PyYAML-5.3.1-cp27-cp27m-win_amd64.whl", hash = "sha256:240097ff019d7c70a4922b6869d8a86407758333f02203e0fc6ff79c5dcede76"}, + {file = "PyYAML-5.3.1-cp35-cp35m-win32.whl", hash = "sha256:4f4b913ca1a7319b33cfb1369e91e50354d6f07a135f3b901aca02aa95940bd2"}, + {file = "PyYAML-5.3.1-cp35-cp35m-win_amd64.whl", hash = "sha256:cc8955cfbfc7a115fa81d85284ee61147059a753344bc51098f3ccd69b0d7e0c"}, + {file = "PyYAML-5.3.1-cp36-cp36m-win32.whl", hash = "sha256:7739fc0fa8205b3ee8808aea45e968bc90082c10aef6ea95e855e10abf4a37b2"}, + {file = "PyYAML-5.3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:69f00dca373f240f842b2931fb2c7e14ddbacd1397d57157a9b005a6a9942648"}, + {file = "PyYAML-5.3.1-cp37-cp37m-win32.whl", hash = "sha256:d13155f591e6fcc1ec3b30685d50bf0711574e2c0dfffd7644babf8b5102ca1a"}, + {file = "PyYAML-5.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:73f099454b799e05e5ab51423c7bcf361c58d3206fa7b0d555426b1f4d9a3eaf"}, + {file = "PyYAML-5.3.1-cp38-cp38-win32.whl", hash = "sha256:06a0d7ba600ce0b2d2fe2e78453a470b5a6e000a985dd4a4e54e436cc36b0e97"}, + {file = "PyYAML-5.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:95f71d2af0ff4227885f7a6605c37fd53d3a106fcab511b8860ecca9fcf400ee"}, + {file = "PyYAML-5.3.1.tar.gz", hash = "sha256:b8eac752c5e14d3eca0e6dd9199cd627518cb5ec06add0de9d32baeee6fe645d"}, ] six = [ - {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, - {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, + {file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"}, + {file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"}, ] toml = [ - {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, - {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, -] -tomli = [ - {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, - {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, -] -typing-extensions = [ - {file = "typing_extensions-4.5.0-py3-none-any.whl", hash = "sha256:fb33085c39dd998ac16d1431ebc293a8b3eedd00fd4a32de0ff79002c19511b4"}, - {file = "typing_extensions-4.5.0.tar.gz", hash = "sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb"}, + {file = "toml-0.10.1-py2.py3-none-any.whl", hash = 
"sha256:bda89d5935c2eac546d648028b9901107a595863cb36bae0c73ac804a9b4ce88"}, + {file = "toml-0.10.1.tar.gz", hash = "sha256:926b612be1e5ce0634a2ca03470f95169cf16f939018233a670519cb4ac58b0f"}, ] virtualenv = [ - {file = "virtualenv-20.19.0-py3-none-any.whl", hash = "sha256:54eb59e7352b573aa04d53f80fc9736ed0ad5143af445a1e539aada6eb947dd1"}, - {file = "virtualenv-20.19.0.tar.gz", hash = "sha256:37a640ba82ed40b226599c522d411e4be5edb339a0c0de030c0dc7b646d61590"}, + {file = "virtualenv-20.0.31-py2.py3-none-any.whl", hash = "sha256:e0305af10299a7fb0d69393d8f04cb2965dda9351140d11ac8db4e5e3970451b"}, + {file = "virtualenv-20.0.31.tar.gz", hash = "sha256:43add625c53c596d38f971a465553f6318decc39d98512bc100fa1b1e839c8dc"}, +] +wcwidth = [ + {file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"}, + {file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"}, +] +zipp = [ + {file = "zipp-3.1.0-py3-none-any.whl", hash = "sha256:aa36550ff0c0b7ef7fa639055d797116ee891440eac1a56f378e2d3179e0320b"}, + {file = "zipp-3.1.0.tar.gz", hash = "sha256:c599e4d75c98f6798c509911d08a22e6c021d074469042177c8c86fb92eefd96"}, ] diff --git a/daemon/proto/core/api/grpc/configservices.proto b/daemon/proto/core/api/grpc/configservices.proto index 25be616d..f1272df8 100644 --- a/daemon/proto/core/api/grpc/configservices.proto +++ b/daemon/proto/core/api/grpc/configservices.proto @@ -39,10 +39,16 @@ message ConfigMode { map config = 2; } +message GetConfigServicesRequest { + int32 session_id = 1; +} + +message GetConfigServicesResponse { + repeated ConfigService services = 1; +} + message GetConfigServiceDefaultsRequest { string name = 1; - int32 session_id = 2; - int32 node_id = 3; } message GetConfigServiceDefaultsResponse { @@ -51,6 +57,14 @@ message GetConfigServiceDefaultsResponse { repeated ConfigMode modes = 3; } +message GetNodeConfigServiceConfigsRequest { + int32 session_id = 1; +} + +message GetNodeConfigServiceConfigsResponse { + repeated ConfigServiceConfig configs = 1; +} + message GetNodeConfigServiceRequest { int32 session_id = 1; int32 node_id = 2; @@ -61,12 +75,22 @@ message GetNodeConfigServiceResponse { map config = 1; } -message GetConfigServiceRenderedRequest { +message GetNodeConfigServicesRequest { + int32 session_id = 1; + int32 node_id = 2; +} + +message GetNodeConfigServicesResponse { + repeated string services = 1; +} + +message SetNodeConfigServiceRequest { int32 session_id = 1; int32 node_id = 2; string name = 3; + map config = 4; } -message GetConfigServiceRenderedResponse { - map rendered = 1; +message SetNodeConfigServiceResponse { + bool result = 1; } diff --git a/daemon/proto/core/api/grpc/core.proto b/daemon/proto/core/api/grpc/core.proto index 09f2c764..d168afe0 100644 --- a/daemon/proto/core/api/grpc/core.proto +++ b/daemon/proto/core/api/grpc/core.proto @@ -25,6 +25,24 @@ service CoreApi { } rpc CheckSession (CheckSessionRequest) returns (CheckSessionResponse) { } + rpc GetSessionOptions (GetSessionOptionsRequest) returns (GetSessionOptionsResponse) { + } + rpc SetSessionOptions (SetSessionOptionsRequest) returns (SetSessionOptionsResponse) { + } + rpc SetSessionMetadata (SetSessionMetadataRequest) returns (SetSessionMetadataResponse) { + } + rpc GetSessionMetadata (GetSessionMetadataRequest) returns (GetSessionMetadataResponse) { + } + rpc GetSessionLocation (GetSessionLocationRequest) returns (GetSessionLocationResponse) { + } + rpc SetSessionLocation 
(SetSessionLocationRequest) returns (SetSessionLocationResponse) { + } + rpc SetSessionState (SetSessionStateRequest) returns (SetSessionStateResponse) { + } + rpc SetSessionUser (SetSessionUserRequest) returns (SetSessionUserResponse) { + } + rpc AddSessionServer (AddSessionServerRequest) returns (AddSessionServerResponse) { + } rpc SessionAlert (SessionAlertRequest) returns (SessionAlertResponse) { } @@ -49,22 +67,28 @@ service CoreApi { } rpc GetNodeTerminal (GetNodeTerminalRequest) returns (GetNodeTerminalResponse) { } - rpc MoveNode (MoveNodeRequest) returns (MoveNodeResponse) { - } rpc MoveNodes (stream MoveNodesRequest) returns (MoveNodesResponse) { } // link rpc + rpc GetNodeLinks (GetNodeLinksRequest) returns (GetNodeLinksResponse) { + } rpc AddLink (AddLinkRequest) returns (AddLinkResponse) { } rpc EditLink (EditLinkRequest) returns (EditLinkResponse) { } rpc DeleteLink (DeleteLinkRequest) returns (DeleteLinkResponse) { } - rpc Linked (LinkedRequest) returns (LinkedResponse) { + + // hook rpc + rpc GetHooks (GetHooksRequest) returns (GetHooksResponse) { + } + rpc AddHook (AddHookRequest) returns (AddHookResponse) { } // mobility rpc + rpc GetMobilityConfigs (mobility.GetMobilityConfigsRequest) returns (mobility.GetMobilityConfigsResponse) { + } rpc GetMobilityConfig (mobility.GetMobilityConfigRequest) returns (mobility.GetMobilityConfigResponse) { } rpc SetMobilityConfig (mobility.SetMobilityConfigRequest) returns (mobility.SetMobilityConfigResponse) { @@ -73,28 +97,42 @@ service CoreApi { } // service rpc + rpc GetServices (services.GetServicesRequest) returns (services.GetServicesResponse) { + } rpc GetServiceDefaults (services.GetServiceDefaultsRequest) returns (services.GetServiceDefaultsResponse) { } rpc SetServiceDefaults (services.SetServiceDefaultsRequest) returns (services.SetServiceDefaultsResponse) { } + rpc GetNodeServiceConfigs (services.GetNodeServiceConfigsRequest) returns (services.GetNodeServiceConfigsResponse) { + } rpc GetNodeService (services.GetNodeServiceRequest) returns (services.GetNodeServiceResponse) { } rpc GetNodeServiceFile (services.GetNodeServiceFileRequest) returns (services.GetNodeServiceFileResponse) { } + rpc SetNodeService (services.SetNodeServiceRequest) returns (services.SetNodeServiceResponse) { + } + rpc SetNodeServiceFile (services.SetNodeServiceFileRequest) returns (services.SetNodeServiceFileResponse) { + } rpc ServiceAction (services.ServiceActionRequest) returns (services.ServiceActionResponse) { } // config services + rpc GetConfigServices (configservices.GetConfigServicesRequest) returns (configservices.GetConfigServicesResponse) { + } rpc GetConfigServiceDefaults (configservices.GetConfigServiceDefaultsRequest) returns (configservices.GetConfigServiceDefaultsResponse) { } + rpc GetNodeConfigServiceConfigs (configservices.GetNodeConfigServiceConfigsRequest) returns (configservices.GetNodeConfigServiceConfigsResponse) { + } rpc GetNodeConfigService (configservices.GetNodeConfigServiceRequest) returns (configservices.GetNodeConfigServiceResponse) { } - rpc ConfigServiceAction (services.ServiceActionRequest) returns (services.ServiceActionResponse) { + rpc GetNodeConfigServices (configservices.GetNodeConfigServicesRequest) returns (configservices.GetNodeConfigServicesResponse) { } - rpc GetConfigServiceRendered (configservices.GetConfigServiceRenderedRequest) returns (configservices.GetConfigServiceRenderedResponse) { + rpc SetNodeConfigService (configservices.SetNodeConfigServiceRequest) returns 
(configservices.SetNodeConfigServiceResponse) { } // wlan rpc + rpc GetWlanConfigs (wlan.GetWlanConfigsRequest) returns (wlan.GetWlanConfigsResponse) { + } rpc GetWlanConfig (wlan.GetWlanConfigRequest) returns (wlan.GetWlanConfigResponse) { } rpc SetWlanConfig (wlan.SetWlanConfigRequest) returns (wlan.SetWlanConfigResponse) { @@ -102,25 +140,23 @@ service CoreApi { rpc WlanLink (wlan.WlanLinkRequest) returns (wlan.WlanLinkResponse) { } - // wireless rpc - rpc WirelessLinked (WirelessLinkedRequest) returns (WirelessLinkedResponse) { - } - rpc WirelessConfig (WirelessConfigRequest) returns (WirelessConfigResponse) { - } - rpc GetWirelessConfig (GetWirelessConfigRequest) returns (GetWirelessConfigResponse) { - } - // emane rpc + rpc GetEmaneConfig (emane.GetEmaneConfigRequest) returns (emane.GetEmaneConfigResponse) { + } + rpc SetEmaneConfig (emane.SetEmaneConfigRequest) returns (emane.SetEmaneConfigResponse) { + } + rpc GetEmaneModels (emane.GetEmaneModelsRequest) returns (emane.GetEmaneModelsResponse) { + } rpc GetEmaneModelConfig (emane.GetEmaneModelConfigRequest) returns (emane.GetEmaneModelConfigResponse) { } rpc SetEmaneModelConfig (emane.SetEmaneModelConfigRequest) returns (emane.SetEmaneModelConfigResponse) { } + rpc GetEmaneModelConfigs (emane.GetEmaneModelConfigsRequest) returns (emane.GetEmaneModelConfigsResponse) { + } rpc GetEmaneEventChannel (emane.GetEmaneEventChannelRequest) returns (emane.GetEmaneEventChannelResponse) { } rpc EmanePathlosses (stream emane.EmanePathlossesRequest) returns (emane.EmanePathlossesResponse) { } - rpc EmaneLink (emane.EmaneLinkRequest) returns (emane.EmaneLinkResponse) { - } // xml rpc rpc SaveXml (SaveXmlRequest) returns (SaveXmlResponse) { @@ -131,28 +167,27 @@ service CoreApi { // utilities rpc GetInterfaces (GetInterfacesRequest) returns (GetInterfacesResponse) { } - rpc ExecuteScript (ExecuteScriptRequest) returns (ExecuteScriptResponse) { + rpc EmaneLink (emane.EmaneLinkRequest) returns (emane.EmaneLinkResponse) { } - - // globals - rpc GetConfig (GetConfigRequest) returns (GetConfigResponse) { + rpc ExecuteScript (ExecuteScriptRequest) returns (ExecuteScriptResponse) { } } // rpc request/response messages -message GetConfigRequest { -} - -message GetConfigResponse { - repeated services.Service services = 1; - repeated configservices.ConfigService config_services = 2; - repeated string emane_models = 3; -} - - message StartSessionRequest { - Session session = 1; - bool definition = 2; + int32 session_id = 1; + repeated Node nodes = 2; + repeated Link links = 3; + repeated Hook hooks = 4; + SessionLocation location = 5; + map emane_config = 6; + repeated wlan.WlanConfig wlan_configs = 7; + repeated emane.EmaneModelConfig emane_model_configs = 8; + repeated mobility.MobilityConfig mobility_configs = 9; + repeated services.ServiceConfig service_configs = 10; + repeated services.ServiceFileConfig service_file_configs = 11; + repeated Link asymmetric_links = 12; + repeated configservices.ConfigServiceConfig config_service_configs = 13; } message StartSessionResponse { @@ -173,7 +208,8 @@ message CreateSessionRequest { } message CreateSessionResponse { - Session session = 1; + int32 session_id = 1; + SessionState.Enum state = 2; } message DeleteSessionRequest { @@ -207,6 +243,85 @@ message GetSessionResponse { Session session = 1; } +message GetSessionOptionsRequest { + int32 session_id = 1; +} + +message GetSessionOptionsResponse { + map config = 2; +} + +message SetSessionOptionsRequest { + int32 session_id = 1; + map config = 2; +} + +message 
SetSessionOptionsResponse { + bool result = 1; +} + +message SetSessionMetadataRequest { + int32 session_id = 1; + map config = 2; +} + +message SetSessionMetadataResponse { + bool result = 1; +} + +message GetSessionMetadataRequest { + int32 session_id = 1; +} + +message GetSessionMetadataResponse { + map config = 1; +} + +message GetSessionLocationRequest { + int32 session_id = 1; +} + +message GetSessionLocationResponse { + SessionLocation location = 1; +} + +message SetSessionLocationRequest { + int32 session_id = 1; + SessionLocation location = 2; +} + +message SetSessionLocationResponse { + bool result = 1; +} + +message SetSessionStateRequest { + int32 session_id = 1; + SessionState.Enum state = 2; +} + +message SetSessionStateResponse { + bool result = 1; +} + +message SetSessionUserRequest { + int32 session_id = 1; + string user = 2; +} + +message SetSessionUserResponse { + bool result = 1; +} + +message AddSessionServerRequest { + int32 session_id = 1; + string name = 2; + string host = 3; +} + +message AddSessionServerResponse { + bool result = 1; +} + message SessionAlertRequest { int32 session_id = 1; ExceptionLevel.Enum level = 2; @@ -292,11 +407,12 @@ message ConfigEvent { repeated int32 data_types = 5; string data_values = 6; string captions = 7; - string possible_values = 8; - string groups = 9; - int32 iface_id = 10; - int32 network_id = 11; - string opaque = 12; + string bitmap = 8; + string possible_values = 9; + string groups = 10; + int32 iface_id = 11; + int32 network_id = 12; + string opaque = 13; } message ExceptionEvent { @@ -338,14 +454,15 @@ message GetNodeRequest { message GetNodeResponse { Node node = 1; repeated Interface ifaces = 2; - repeated Link links = 3; } message EditNodeRequest { int32 session_id = 1; int32 node_id = 2; - string icon = 3; - string source = 4; + Position position = 3; + string icon = 4; + string source = 5; + Geo geo = 6; } message EditNodeResponse { @@ -371,21 +488,6 @@ message GetNodeTerminalResponse { string terminal = 1; } - -message MoveNodeRequest { - int32 session_id = 1; - int32 node_id = 2; - string source = 3; - oneof move_type { - Position position = 4; - Geo geo = 5; - } -} - -message MoveNodeResponse { - bool result = 1; -} - message MoveNodesRequest { int32 session_id = 1; int32 node_id = 2; @@ -412,6 +514,15 @@ message NodeCommandResponse { int32 return_code = 2; } +message GetNodeLinksRequest { + int32 session_id = 1; + int32 node_id = 2; +} + +message GetNodeLinksResponse { + repeated Link links = 1; +} + message AddLinkRequest { int32 session_id = 1; Link link = 2; @@ -451,6 +562,23 @@ message DeleteLinkResponse { bool result = 1; } +message GetHooksRequest { + int32 session_id = 1; +} + +message GetHooksResponse { + repeated Hook hooks = 1; +} + +message AddHookRequest { + int32 session_id = 1; + Hook hook = 2; +} + +message AddHookResponse { + bool result = 1; +} + message SaveXmlRequest { int32 session_id = 1; } @@ -479,7 +607,6 @@ message GetInterfacesResponse { message ExecuteScriptRequest { string script = 1; - string args = 2; } message ExecuteScriptResponse { @@ -545,8 +672,6 @@ message NodeType { CONTROL_NET = 13; DOCKER = 15; LXC = 16; - WIRELESS = 17; - PODMAN = 18; } } @@ -593,10 +718,15 @@ message Session { repeated services.ServiceDefaults default_services = 7; SessionLocation location = 8; repeated Hook hooks = 9; - map metadata = 10; - string file = 11; - map options = 12; - repeated Server servers = 13; + repeated string emane_models = 10; + map emane_config = 11; + repeated emane.GetEmaneModelConfig 
emane_model_configs = 12; + map wlan_configs = 13; + repeated services.NodeServiceConfig service_configs = 14; + repeated configservices.ConfigServiceConfig config_service_configs = 15; + map mobility_configs = 16; + map metadata = 17; + string file = 18; } message SessionSummary { @@ -622,13 +752,6 @@ message Node { Geo geo = 12; string dir = 13; string channel = 14; - int32 canvas = 15; - map wlan_config = 16; - map mobility_config = 17; - map service_configs = 18; - map config_service_configs= 19; - repeated emane.NodeEmaneConfig emane_configs = 20; - map wireless_config = 21; } message Link { @@ -670,8 +793,6 @@ message Interface { int32 mtu = 10; int32 node_id = 11; int32 net2_id = 12; - int32 nem_id = 13; - int32 nem_port = 14; } message SessionLocation { @@ -695,52 +816,3 @@ message Geo { float lon = 2; float alt = 3; } - -message Server { - string name = 1; - string host = 2; -} - -message LinkedRequest { - int32 session_id = 1; - int32 node1_id = 2; - int32 node2_id = 3; - int32 iface1_id = 4; - int32 iface2_id = 5; - bool linked = 6; -} - -message LinkedResponse { -} - -message WirelessLinkedRequest { - int32 session_id = 1; - int32 wireless_id = 2; - int32 node1_id = 3; - int32 node2_id = 4; - bool linked = 5; -} - -message WirelessLinkedResponse { -} - -message WirelessConfigRequest { - int32 session_id = 1; - int32 wireless_id = 2; - int32 node1_id = 3; - int32 node2_id = 4; - LinkOptions options1 = 5; - LinkOptions options2 = 6; -} - -message WirelessConfigResponse { -} - -message GetWirelessConfigRequest { - int32 session_id = 1; - int32 node_id = 2; -} - -message GetWirelessConfigResponse { - map config = 1; -} diff --git a/daemon/proto/core/api/grpc/emane.proto b/daemon/proto/core/api/grpc/emane.proto index b8579917..ad6a22ca 100644 --- a/daemon/proto/core/api/grpc/emane.proto +++ b/daemon/proto/core/api/grpc/emane.proto @@ -4,6 +4,31 @@ package emane; import "core/api/grpc/common.proto"; +message GetEmaneConfigRequest { + int32 session_id = 1; +} + +message GetEmaneConfigResponse { + map config = 1; +} + +message SetEmaneConfigRequest { + int32 session_id = 1; + map config = 2; +} + +message SetEmaneConfigResponse { + bool result = 1; +} + +message GetEmaneModelsRequest { + int32 session_id = 1; +} + +message GetEmaneModelsResponse { + repeated string models = 1; +} + message GetEmaneModelConfigRequest { int32 session_id = 1; int32 node_id = 2; @@ -24,6 +49,10 @@ message SetEmaneModelConfigResponse { bool result = 1; } +message GetEmaneModelConfigsRequest { + int32 session_id = 1; +} + message GetEmaneModelConfig { int32 node_id = 1; string model = 2; @@ -31,15 +60,12 @@ message GetEmaneModelConfig { map config = 4; } -message NodeEmaneConfig { - int32 iface_id = 1; - string model = 2; - map config = 3; +message GetEmaneModelConfigsResponse { + repeated GetEmaneModelConfig configs = 1; } message GetEmaneEventChannelRequest { int32 session_id = 1; - int32 nem_id = 2; } message GetEmaneEventChannelResponse { diff --git a/daemon/proto/core/api/grpc/mobility.proto b/daemon/proto/core/api/grpc/mobility.proto index 6eaf8fc3..abfad8ef 100644 --- a/daemon/proto/core/api/grpc/mobility.proto +++ b/daemon/proto/core/api/grpc/mobility.proto @@ -17,6 +17,14 @@ message MobilityConfig { map config = 2; } +message GetMobilityConfigsRequest { + int32 session_id = 1; +} + +message GetMobilityConfigsResponse { + map configs = 1; +} + message GetMobilityConfigRequest { int32 session_id = 1; int32 node_id = 2; diff --git a/daemon/proto/core/api/grpc/services.proto 
b/daemon/proto/core/api/grpc/services.proto index 1b430f99..cf6d9cbf 100644 --- a/daemon/proto/core/api/grpc/services.proto +++ b/daemon/proto/core/api/grpc/services.proto @@ -37,7 +37,7 @@ message ServiceAction { } message ServiceDefaults { - string model = 1; + string node_type = 1; repeated string services = 2; } @@ -66,6 +66,14 @@ message NodeServiceConfig { map files = 4; } +message GetServicesRequest { + +} + +message GetServicesResponse { + repeated Service services = 1; +} + message GetServiceDefaultsRequest { int32 session_id = 1; } @@ -83,6 +91,14 @@ message SetServiceDefaultsResponse { bool result = 1; } +message GetNodeServiceConfigsRequest { + int32 session_id = 1; +} + +message GetNodeServiceConfigsResponse { + repeated NodeServiceConfig configs = 1; +} + message GetNodeServiceRequest { int32 session_id = 1; int32 node_id = 2; @@ -104,6 +120,24 @@ message GetNodeServiceFileResponse { string data = 1; } +message SetNodeServiceRequest { + int32 session_id = 1; + ServiceConfig config = 2; +} + +message SetNodeServiceResponse { + bool result = 1; +} + +message SetNodeServiceFileRequest { + int32 session_id = 1; + ServiceFileConfig config = 2; +} + +message SetNodeServiceFileResponse { + bool result = 1; +} + message ServiceActionRequest { int32 session_id = 1; int32 node_id = 2; diff --git a/daemon/proto/core/api/grpc/wlan.proto b/daemon/proto/core/api/grpc/wlan.proto index 2d161a04..9605d633 100644 --- a/daemon/proto/core/api/grpc/wlan.proto +++ b/daemon/proto/core/api/grpc/wlan.proto @@ -9,6 +9,14 @@ message WlanConfig { map config = 2; } +message GetWlanConfigsRequest { + int32 session_id = 1; +} + +message GetWlanConfigsResponse { + map configs = 1; +} + message GetWlanConfigRequest { int32 session_id = 1; int32 node_id = 2; diff --git a/daemon/pyproject.toml b/daemon/pyproject.toml index 0d1acf7a..6916c197 100644 --- a/daemon/pyproject.toml +++ b/daemon/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "core" -version = "9.0.3" +version = "7.3.0" description = "CORE Common Open Research Emulator" authors = ["Boeing Research and Technology"] license = "BSD-2-Clause" @@ -14,36 +14,29 @@ include = [ ] exclude = ["core/constants.py.in"] -[tool.poetry.scripts] -core-daemon = "core.scripts.daemon:main" -core-cli = "core.scripts.cli:main" -core-gui = "core.scripts.gui:main" -core-player = "core.scripts.player:main" -core-route-monitor = "core.scripts.routemonitor:main" -core-service-update = "core.scripts.serviceupdate:main" -core-cleanup = "core.scripts.cleanup:main" [tool.poetry.dependencies] -python = "^3.9" -fabric = "2.7.1" -grpcio = "1.54.2" -invoke = "1.7.3" -lxml = "4.9.1" +python = "^3.6" +dataclasses = { version = "*", python = "~3.6" } +fabric = "2.5.0" +grpcio = "1.27.2" +invoke = "1.4.1" +lxml = "4.5.1" +mako = "1.1.3" netaddr = "0.7.19" -protobuf = "4.21.9" -pyproj = "3.3.1" -Pillow = "9.4.0" -Mako = "1.2.3" -PyYAML = "6.0.1" +pillow = "7.1.2" +protobuf = "3.12.2" +pyproj = "2.6.1.post1" +pyyaml = "5.3.1" -[tool.poetry.group.dev.dependencies] -pytest = "6.2.5" -grpcio-tools = "1.54.2" -black = "22.12.0" +[tool.poetry.dev-dependencies] +black = "==19.3b0" flake8 = "3.8.2" +grpcio-tools = "1.27.2" isort = "4.3.21" mock = "4.0.2" pre-commit = "2.1.1" +pytest = "5.4.3" [tool.isort] skip_glob = "*_pb2*.py,doc,build" diff --git a/daemon/scripts/core-cleanup b/daemon/scripts/core-cleanup new file mode 100755 index 00000000..8182a917 --- /dev/null +++ b/daemon/scripts/core-cleanup @@ -0,0 +1,70 @@ +#!/bin/sh + +if [ "z$1" = "z-h" -o "z$1" = "z--help" ]; then + echo "usage: 
$0 [-d [-l]]" + echo -n " Clean up all CORE namespaces processes, bridges, interfaces, " + echo "and session\n directories. Options:" + echo " -h show this help message and exit" + echo " -d also kill the Python daemon" + echo " -l remove the core-daemon.log file" + exit 0 +fi + +if [ `id -u` != 0 ]; then + echo "Permission denied. Re-run this script as root." + exit 1 +fi + +PATH="/sbin:/bin:/usr/sbin:/usr/bin" +export PATH + +if [ "z$1" = "z-d" ]; then + pypids=`pidof python3 python` + for p in $pypids; do + grep -q core-daemon /proc/$p/cmdline + if [ $? = 0 ]; then + echo "cleaning up core-daemon process: $p" + kill -9 $p + fi + done +fi + +if [ "z$2" = "z-l" ]; then + rm -f /var/log/core-daemon.log +fi + +kaopts="-v" +killall --help 2>&1 | grep -q namespace +if [ $? = 0 ]; then + kaopts="$kaopts --ns 0" +fi + +vnodedpids=`pidof vnoded` +if [ "z$vnodedpids" != "z" ]; then + echo "cleaning up old vnoded processes: $vnodedpids" + killall $kaopts -KILL vnoded + # pause for 1 second for interfaces to disappear + sleep 1 +fi +killall -q emane +killall -q emanetransportd +killall -q emaneeventservice + +if [ -d /sys/class/net ]; then + ifcommand="ls -1 /sys/class/net" +else + ifcommand="ip -o link show | sed -r -e 's/[0-9]+: ([^[:space:]]+): .*/\1/'" +fi + +eval "$ifcommand" | awk ' + /^veth[0-9]+\./ {print "removing interface " $1; system("ip link del " $1);} + /tmp\./ {print "removing interface " $1; system("ip link del " $1);} + /gt\./ {print "removing interface " $1; system("ip link del " $1);} + /b\./ {print "removing bridge " $1; system("ip link set " $1 " down; ip link del " $1);} +' + +ebtables -L FORWARD | awk ' + /^-.*b\./ {print "removing ebtables " $0; system("ebtables -D FORWARD " $0); print "removing ebtables chain " $4; system("ebtables -X " $4);} +' + +rm -rf /tmp/pycore* diff --git a/daemon/core/scripts/cli.py b/daemon/scripts/core-cli similarity index 62% rename from daemon/core/scripts/cli.py rename to daemon/scripts/core-cli index 760dbad7..a7571471 100755 --- a/daemon/core/scripts/cli.py +++ b/daemon/scripts/core-cli @@ -1,44 +1,33 @@ -import json +#!/usr/bin/env python3 import sys from argparse import ( ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeError, Namespace, + _SubParsersAction, ) from functools import wraps from pathlib import Path -from typing import Any, Optional +from typing import Any, Optional, Tuple import grpc import netaddr -from google.protobuf.json_format import MessageToDict +from google.protobuf.json_format import MessageToJson from netaddr import EUI, AddrFormatError, IPNetwork from core.api.grpc.client import CoreGrpcClient -from core.api.grpc.wrappers import ( - ConfigOption, +from core.api.grpc.core_pb2 import ( Geo, Interface, - Link, LinkOptions, Node, NodeType, Position, + SessionState, ) -NODE_TYPES = [x.name for x in NodeType if x != NodeType.PEER_TO_PEER] - - -def protobuf_to_json(message: Any) -> dict[str, Any]: - return MessageToDict( - message, including_default_value_fields=True, preserving_proto_field_name=True - ) - - -def print_json(data: Any) -> None: - data = json.dumps(data, indent=2) - print(data) +NODE_TYPES = [k for k, v in NodeType.Enum.items() if v != NodeType.PEER_TO_PEER] def coreclient(func): @@ -82,7 +71,7 @@ def ip6_type(value: str) -> IPNetwork: raise ArgumentTypeError(f"invalid ip6 address: {value}") -def position_type(value: str) -> tuple[float, float]: +def position_type(value: str) -> Tuple[float, float]: error = "invalid position, must be in the format: float,float" try: values = [float(x) for x in 
value.split(",")] @@ -94,7 +83,7 @@ def position_type(value: str) -> tuple[float, float]: return x, y -def geo_type(value: str) -> tuple[float, float, float]: +def geo_type(value: str) -> Tuple[float, float, float]: error = "invalid geo, must be in the format: float,float,float" try: values = [float(x) for x in value.split(",")] @@ -106,32 +95,35 @@ def geo_type(value: str) -> tuple[float, float, float]: return lon, lat, alt -def file_type(value: str) -> Path: +def file_type(value: str) -> str: path = Path(value) if not path.is_file(): raise ArgumentTypeError(f"invalid file: {value}") - return path + return str(path.absolute()) def get_current_session(core: CoreGrpcClient, session_id: Optional[int]) -> int: if session_id: return session_id - sessions = core.get_sessions() - if not sessions: + response = core.get_sessions() + if not response.sessions: print("no current session to interact with") sys.exit(1) - return sessions[0].id + return response.sessions[0].id -def create_iface( - iface_id: int, mac: str, ip4_net: IPNetwork, ip6_net: IPNetwork -) -> Interface: +def create_iface(iface_id: int, mac: str, ip4_net: IPNetwork, ip6_net: IPNetwork) -> Interface: ip4 = str(ip4_net.ip) if ip4_net else None ip4_mask = ip4_net.prefixlen if ip4_net else None ip6 = str(ip6_net.ip) if ip6_net else None ip6_mask = ip6_net.prefixlen if ip6_net else None return Interface( - id=iface_id, mac=mac, ip4=ip4, ip4_mask=ip4_mask, ip6=ip6, ip6_mask=ip6_mask + id=iface_id, + mac=mac, + ip4=ip4, + ip4_mask=ip4_mask, + ip6=ip6, + ip6_mask=ip6_mask, ) @@ -145,18 +137,23 @@ def print_iface(iface: Interface) -> None: print(f"{iface.id:<3} | {iface.mac:<17} | {iface_ip4:<18} | {iface_ip6}") +def print_json(message: Any) -> None: + json = MessageToJson(message, preserving_proto_field_name=True) + print(json) + + @coreclient def get_wlan_config(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) - config = core.get_wlan_config(session_id, args.node) + response = core.get_wlan_config(session_id, args.node) if args.json: - print_json(ConfigOption.to_dict(config)) + print_json(response) else: size = 0 - for option in config.values(): + for option in response.config.values(): size = max(size, len(option.name)) print(f"{'Name':<{size}.{size}} | Value") - for option in config.values(): + for option in response.config.values(): print(f"{option.name:<{size}.{size}} | {option.value}") @@ -174,62 +171,61 @@ def set_wlan_config(core: CoreGrpcClient, args: Namespace) -> None: config["jitter"] = str(args.jitter) if args.range: config["range"] = str(args.range) - result = core.set_wlan_config(session_id, args.node, config) + response = core.set_wlan_config(session_id, args.node, config) if args.json: - print_json(dict(result=result)) + print_json(response) else: - print(f"set wlan config: {result}") + print(f"set wlan config: {response.result}") @coreclient def open_xml(core: CoreGrpcClient, args: Namespace) -> None: - result, session_id = core.open_xml(args.file, args.start) + response = core.open_xml(args.file, args.start) if args.json: - print_json(dict(result=result, session_id=session_id)) + print_json(response) else: - print(f"opened xml: {result},{session_id}") + print(f"opened xml: {response.result}") @coreclient def query_sessions(core: CoreGrpcClient, args: Namespace) -> None: - sessions = core.get_sessions() + response = core.get_sessions() if args.json: - sessions = [protobuf_to_json(x.to_proto()) for x in sessions] - print_json(sessions) + print_json(response) else: 
print("Session ID | Session State | Nodes") - for session in sessions: - print(f"{session.id:<10} | {session.state.name:<13} | {session.nodes}") + for s in response.sessions: + state = SessionState.Enum.Name(s.state) + print(f"{s.id:<10} | {state:<13} | {s.nodes}") @coreclient def query_session(core: CoreGrpcClient, args: Namespace) -> None: - session = core.get_session(args.id) + response = core.get_session(args.id) if args.json: - session = protobuf_to_json(session.to_proto()) - print_json(session) + print_json(response) else: print("Nodes") - print("ID | Name | Type | XY | Geo") - for node in session.nodes.values(): - xy_pos = f"{int(node.position.x)},{int(node.position.y)}" - geo_pos = f"{node.geo.lon:.7f},{node.geo.lat:.7f},{node.geo.alt:f}" - print( - f"{node.id:<7} | {node.name[:7]:<7} | {node.type.name[:7]:<7} | {xy_pos:<9} | {geo_pos}" - ) + print("Node ID | Node Name | Node Type") + names = {} + for node in response.session.nodes: + names[node.id] = node.name + node_type = NodeType.Enum.Name(node.type) + print(f"{node.id:<7} | {node.name:<9} | {node_type}") + print("\nLinks") - for link in session.links: - n1 = session.nodes[link.node1_id].name - n2 = session.nodes[link.node2_id].name - print("Node | ", end="") + for link in response.session.links: + n1 = names[link.node1_id] + n2 = names[link.node2_id] + print(f"Node | ", end="") print_iface_header() print(f"{n1:<6} | ", end="") - if link.iface1: + if link.HasField("iface1"): print_iface(link.iface1) else: print() print(f"{n2:<6} | ", end="") - if link.iface2: + if link.HasField("iface2"): print_iface(link.iface2) else: print() @@ -238,49 +234,38 @@ def query_session(core: CoreGrpcClient, args: Namespace) -> None: @coreclient def query_node(core: CoreGrpcClient, args: Namespace) -> None: - session = core.get_session(args.id) - node, ifaces, _ = core.get_node(args.id, args.node) + names = {} + response = core.get_session(args.id) + for node in response.session.nodes: + names[node.id] = node.name + + response = core.get_node(args.id, args.node) if args.json: - node = protobuf_to_json(node.to_proto()) - ifaces = [protobuf_to_json(x.to_proto()) for x in ifaces] - print_json(dict(node=node, ifaces=ifaces)) + print_json(response) else: - print("ID | Name | Type | XY | Geo") - xy_pos = f"{int(node.position.x)},{int(node.position.y)}" - geo_pos = f"{node.geo.lon:.7f},{node.geo.lat:.7f},{node.geo.alt:f}" - print( - f"{node.id:<7} | {node.name[:7]:<7} | {node.type.name[:7]:<7} | {xy_pos:<9} | {geo_pos}" - ) - if ifaces: - print("Interfaces") - print("Connected To | ", end="") - print_iface_header() - for iface in ifaces: - if iface.net_id == node.id: - if iface.node_id: - name = session.nodes[iface.node_id].name - else: - name = session.nodes[iface.net2_id].name + node = response.node + node_type = NodeType.Enum.Name(node.type) + print("ID | Name | Type") + print(f"{node.id:<4} | {node.name:<7} | {node_type}") + print("Interfaces") + print("Connected To | ", end="") + print_iface_header() + for iface in response.ifaces: + if iface.net_id == node.id: + if iface.node_id: + name = names[iface.node_id] else: - net_node = session.nodes.get(iface.net_id) - name = net_node.name if net_node else "" - print(f"{name:<12} | ", end="") - print_iface(iface) - - -@coreclient -def delete_session(core: CoreGrpcClient, args: Namespace) -> None: - result = core.delete_session(args.id) - if args.json: - print_json(dict(result=result)) - else: - print(f"delete session({args.id}): {result}") + name = names[iface.net2_id] + else: + name = 
names.get(iface.net_id, "") + print(f"{name:<12} | ", end="") + print_iface(iface) @coreclient def add_node(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) - node_type = NodeType[args.type] + node_type = NodeType.Enum.Value(args.type) pos = None if args.pos: x, y = args.pos @@ -300,25 +285,15 @@ def add_node(core: CoreGrpcClient, args: Namespace) -> None: position=pos, geo=geo, ) - node_id = core.add_node(session_id, node) + response = core.add_node(session_id, node) if args.json: - print_json(dict(node_id=node_id)) + print_json(response) else: - print(f"created node: {node_id}") + print(f"created node: {response.node_id}") @coreclient def edit_node(core: CoreGrpcClient, args: Namespace) -> None: - session_id = get_current_session(core, args.session) - result = core.edit_node(session_id, args.id, args.icon) - if args.json: - print_json(dict(result=result)) - else: - print(f"edit node: {result}") - - -@coreclient -def move_node(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) pos = None if args.pos: @@ -328,21 +303,21 @@ def move_node(core: CoreGrpcClient, args: Namespace) -> None: if args.geo: lon, lat, alt = args.geo geo = Geo(lon=lon, lat=lat, alt=alt) - result = core.move_node(session_id, args.id, pos, geo) + response = core.edit_node(session_id, args.id, pos, args.icon, geo) if args.json: - print_json(dict(result=result)) + print_json(response) else: - print(f"move node: {result}") + print(f"edit node: {response.result}") @coreclient def delete_node(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) - result = core.delete_node(session_id, args.id) + response = core.delete_node(session_id, args.id) if args.json: - print_json(dict(result=result)) + print_json(response) else: - print(f"deleted node: {result}") + print(f"deleted node: {response.result}") @coreclient @@ -350,14 +325,10 @@ def add_link(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) iface1 = None if args.iface1_id is not None: - iface1 = create_iface( - args.iface1_id, args.iface1_mac, args.iface1_ip4, args.iface1_ip6 - ) + iface1 = create_iface(args.iface1_id, args.iface1_mac, args.iface1_ip4, args.iface1_ip6) iface2 = None if args.iface2_id is not None: - iface2 = create_iface( - args.iface2_id, args.iface2_mac, args.iface2_ip4, args.iface2_ip6 - ) + iface2 = create_iface(args.iface2_id, args.iface2_mac, args.iface2_ip4, args.iface2_ip6) options = LinkOptions( bandwidth=args.bandwidth, loss=args.loss, @@ -366,14 +337,11 @@ def add_link(core: CoreGrpcClient, args: Namespace) -> None: dup=args.duplicate, unidirectional=args.uni, ) - link = Link(args.node1, args.node2, iface1=iface1, iface2=iface2, options=options) - result, iface1, iface2 = core.add_link(session_id, link) + response = core.add_link(session_id, args.node1, args.node2, iface1, iface2, options) if args.json: - iface1 = protobuf_to_json(iface1.to_proto()) - iface2 = protobuf_to_json(iface2.to_proto()) - print_json(dict(result=result, iface1=iface1, iface2=iface2)) + print_json(response) else: - print(f"add link: {result}") + print(f"add link: {response.result}") @coreclient @@ -387,43 +355,26 @@ def edit_link(core: CoreGrpcClient, args: Namespace) -> None: dup=args.duplicate, unidirectional=args.uni, ) - iface1 = Interface(args.iface1) - iface2 = Interface(args.iface2) - link = Link(args.node1, args.node2, iface1=iface1, iface2=iface2, options=options) - 
result = core.edit_link(session_id, link) + response = core.edit_link( + session_id, args.node1, args.node2, options, args.iface1, args.iface2 + ) if args.json: - print_json(dict(result=result)) + print_json(response) else: - print(f"edit link: {result}") + print(f"edit link: {response.result}") @coreclient def delete_link(core: CoreGrpcClient, args: Namespace) -> None: session_id = get_current_session(core, args.session) - iface1 = Interface(args.iface1) - iface2 = Interface(args.iface2) - link = Link(args.node1, args.node2, iface1=iface1, iface2=iface2) - result = core.delete_link(session_id, link) + response = core.delete_link(session_id, args.node1, args.node2, args.iface1, args.iface2) if args.json: - print_json(dict(result=result)) + print_json(response) else: - print(f"delete link: {result}") + print(f"delete link: {response.result}") -def setup_sessions_parser(parent) -> None: - parser = parent.add_parser("session", help="session interactions") - parser.formatter_class = ArgumentDefaultsHelpFormatter - parser.add_argument("-i", "--id", type=int, help="session id to use", required=True) - subparsers = parser.add_subparsers(help="session commands") - subparsers.required = True - subparsers.dest = "command" - - delete_parser = subparsers.add_parser("delete", help="delete a session") - delete_parser.formatter_class = ArgumentDefaultsHelpFormatter - delete_parser.set_defaults(func=delete_session) - - -def setup_node_parser(parent) -> None: +def setup_node_parser(parent: _SubParsersAction) -> None: parser = parent.add_parser("node", help="node interactions") parser.formatter_class = ArgumentDefaultsHelpFormatter parser.add_argument("-s", "--session", type=int, help="session to interact with") @@ -438,34 +389,23 @@ def setup_node_parser(parent) -> None: add_parser.add_argument( "-t", "--type", choices=NODE_TYPES, default="DEFAULT", help="type of node" ) - add_parser.add_argument( - "-m", "--model", help="used to determine services, optional" - ) + add_parser.add_argument("-m", "--model", help="used to determine services, optional") group = add_parser.add_mutually_exclusive_group(required=True) group.add_argument("-p", "--pos", type=position_type, help="x,y position") group.add_argument("-g", "--geo", type=geo_type, help="lon,lat,alt position") add_parser.add_argument("-ic", "--icon", help="icon to use, optional") add_parser.add_argument("-im", "--image", help="container image, optional") - add_parser.add_argument( - "-e", "--emane", help="emane model, only required for emane nodes" - ) + add_parser.add_argument("-e", "--emane", help="emane model, only required for emane nodes") add_parser.set_defaults(func=add_node) edit_parser = subparsers.add_parser("edit", help="edit a node") edit_parser.formatter_class = ArgumentDefaultsHelpFormatter - edit_parser.add_argument("-i", "--id", type=int, help="id to use", required=True) - edit_parser.add_argument("-ic", "--icon", help="icon to use, optional") - edit_parser.set_defaults(func=edit_node) - - move_parser = subparsers.add_parser("move", help="move a node") - move_parser.formatter_class = ArgumentDefaultsHelpFormatter - move_parser.add_argument( - "-i", "--id", type=int, help="id to use, optional", required=True - ) - group = move_parser.add_mutually_exclusive_group(required=True) + edit_parser.add_argument("-i", "--id", type=int, help="id to use, optional") + group = edit_parser.add_mutually_exclusive_group(required=True) group.add_argument("-p", "--pos", type=position_type, help="x,y position") group.add_argument("-g", "--geo", type=geo_type, 
help="lon,lat,alt position") - move_parser.set_defaults(func=move_node) + edit_parser.add_argument("-ic", "--icon", help="icon to use, optional") + edit_parser.set_defaults(func=edit_node) delete_parser = subparsers.add_parser("delete", help="delete a node") delete_parser.formatter_class = ArgumentDefaultsHelpFormatter @@ -473,7 +413,7 @@ def setup_node_parser(parent) -> None: delete_parser.set_defaults(func=delete_node) -def setup_link_parser(parent) -> None: +def setup_link_parser(parent: _SubParsersAction) -> None: parser = parent.add_parser("link", help="link interactions") parser.formatter_class = ArgumentDefaultsHelpFormatter parser.add_argument("-s", "--session", type=int, help="session to interact with") @@ -486,33 +426,19 @@ def setup_link_parser(parent) -> None: add_parser.add_argument("-n1", "--node1", type=int, help="node1 id", required=True) add_parser.add_argument("-n2", "--node2", type=int, help="node2 id", required=True) add_parser.add_argument("-i1-i", "--iface1-id", type=int, help="node1 interface id") - add_parser.add_argument( - "-i1-m", "--iface1-mac", type=mac_type, help="node1 interface mac" - ) - add_parser.add_argument( - "-i1-4", "--iface1-ip4", type=ip4_type, help="node1 interface ip4" - ) - add_parser.add_argument( - "-i1-6", "--iface1-ip6", type=ip6_type, help="node1 interface ip6" - ) + add_parser.add_argument("-i1-m", "--iface1-mac", type=mac_type, help="node1 interface mac") + add_parser.add_argument("-i1-4", "--iface1-ip4", type=ip4_type, help="node1 interface ip4") + add_parser.add_argument("-i1-6", "--iface1-ip6", type=ip6_type, help="node1 interface ip6") add_parser.add_argument("-i2-i", "--iface2-id", type=int, help="node2 interface id") - add_parser.add_argument( - "-i2-m", "--iface2-mac", type=mac_type, help="node2 interface mac" - ) - add_parser.add_argument( - "-i2-4", "--iface2-ip4", type=ip4_type, help="node2 interface ip4" - ) - add_parser.add_argument( - "-i2-6", "--iface2-ip6", type=ip6_type, help="node2 interface ip6" - ) + add_parser.add_argument("-i2-m", "--iface2-mac", type=mac_type, help="node2 interface mac") + add_parser.add_argument("-i2-4", "--iface2-ip4", type=ip4_type, help="node2 interface ip4") + add_parser.add_argument("-i2-6", "--iface2-ip6", type=ip6_type, help="node2 interface ip6") add_parser.add_argument("-b", "--bandwidth", type=int, help="bandwidth (bps)") add_parser.add_argument("-l", "--loss", type=float, help="loss (%%)") add_parser.add_argument("-j", "--jitter", type=int, help="jitter (us)") add_parser.add_argument("-de", "--delay", type=int, help="delay (us)") add_parser.add_argument("-du", "--duplicate", type=int, help="duplicate (%%)") - add_parser.add_argument( - "-u", "--uni", action="store_true", help="is link unidirectional?" 
- ) + add_parser.add_argument("-u", "--uni", action="store_true", help="is link unidirectional?") add_parser.set_defaults(func=add_link) edit_parser = subparsers.add_parser("edit", help="edit a link") @@ -533,18 +459,14 @@ def setup_link_parser(parent) -> None: delete_parser = subparsers.add_parser("delete", help="delete a link") delete_parser.formatter_class = ArgumentDefaultsHelpFormatter - delete_parser.add_argument( - "-n1", "--node1", type=int, help="node1 id", required=True - ) - delete_parser.add_argument( - "-n2", "--node2", type=int, help="node1 id", required=True - ) + delete_parser.add_argument("-n1", "--node1", type=int, help="node1 id", required=True) + delete_parser.add_argument("-n2", "--node2", type=int, help="node1 id", required=True) delete_parser.add_argument("-i1", "--iface1", type=int, help="node1 interface id") delete_parser.add_argument("-i2", "--iface2", type=int, help="node2 interface id") delete_parser.set_defaults(func=delete_link) -def setup_query_parser(parent) -> None: +def setup_query_parser(parent: _SubParsersAction) -> None: parser = parent.add_parser("query", help="query interactions") subparsers = parser.add_subparsers(help="query commands") subparsers.required = True @@ -556,33 +478,25 @@ def setup_query_parser(parent) -> None: session_parser = subparsers.add_parser("session", help="query session") session_parser.formatter_class = ArgumentDefaultsHelpFormatter - session_parser.add_argument( - "-i", "--id", type=int, help="session to query", required=True - ) + session_parser.add_argument("-i", "--id", type=int, help="session to query", required=True) session_parser.set_defaults(func=query_session) node_parser = subparsers.add_parser("node", help="query node") node_parser.formatter_class = ArgumentDefaultsHelpFormatter - node_parser.add_argument( - "-i", "--id", type=int, help="session to query", required=True - ) - node_parser.add_argument( - "-n", "--node", type=int, help="node to query", required=True - ) + node_parser.add_argument("-i", "--id", type=int, help="session to query", required=True) + node_parser.add_argument("-n", "--node", type=int, help="node to query", required=True) node_parser.set_defaults(func=query_node) -def setup_xml_parser(parent) -> None: +def setup_xml_parser(parent: _SubParsersAction) -> None: parser = parent.add_parser("xml", help="open session xml") parser.formatter_class = ArgumentDefaultsHelpFormatter - parser.add_argument( - "-f", "--file", type=file_type, help="xml file to open", required=True - ) + parser.add_argument("-f", "--file", type=file_type, help="xml file to open", required=True) parser.add_argument("-s", "--start", action="store_true", help="start the session?") parser.set_defaults(func=open_xml) -def setup_wlan_parser(parent) -> None: +def setup_wlan_parser(parent: _SubParsersAction) -> None: parser = parent.add_parser("wlan", help="wlan specific interactions") parser.formatter_class = ArgumentDefaultsHelpFormatter parser.add_argument("-s", "--session", type=int, help="session to interact with") @@ -614,7 +528,6 @@ def main() -> None: subparsers = parser.add_subparsers(help="supported commands") subparsers.required = True subparsers.dest = "command" - setup_sessions_parser(subparsers) setup_node_parser(subparsers) setup_link_parser(subparsers) setup_query_parser(subparsers) diff --git a/daemon/scripts/core-daemon b/daemon/scripts/core-daemon new file mode 100755 index 00000000..16b0ac59 --- /dev/null +++ b/daemon/scripts/core-daemon @@ -0,0 +1,160 @@ +#!/usr/bin/env python3 +""" +core-daemon: the CORE daemon 
is a server process that receives CORE API +messages and instantiates emulated nodes and networks within the kernel. Various +message handlers are defined and some support for sending messages. +""" + +import argparse +import logging +import os +import sys +import threading +import time +from configparser import ConfigParser + +from core import constants +from core.api.grpc.server import CoreGrpcServer +from core.api.tlv.corehandlers import CoreHandler, CoreUdpHandler +from core.api.tlv.coreserver import CoreServer, CoreUdpServer +from core.api.tlv.enumerations import CORE_API_PORT +from core.constants import CORE_CONF_DIR, COREDPY_VERSION +from core.utils import close_onexec, load_logging_config + + +def banner(): + """ + Output the program banner printed to the terminal or log file. + + :return: nothing + """ + logging.info("CORE daemon v.%s started %s", constants.COREDPY_VERSION, time.ctime()) + + +def start_udp(mainserver, server_address): + """ + Start a thread running a UDP server on the same host,port for + connectionless requests. + + :param CoreServer mainserver: main core tcp server to piggy back off of + :param server_address: + :return: CoreUdpServer + """ + mainserver.udpserver = CoreUdpServer(server_address, CoreUdpHandler, mainserver) + mainserver.udpthread = threading.Thread(target=mainserver.udpserver.start, daemon=True) + mainserver.udpthread.start() + + +def cored(cfg): + """ + Start the CoreServer object and enter the server loop. + + :param dict cfg: core configuration + :return: nothing + """ + host = cfg["listenaddr"] + port = int(cfg["port"]) + if host == "" or host is None: + host = "localhost" + + try: + address = (host, port) + server = CoreServer(address, CoreHandler, cfg) + except: + logging.exception("error starting main server on: %s:%s", host, port) + sys.exit(1) + + # initialize grpc api + grpc_server = CoreGrpcServer(server.coreemu) + address_config = cfg["grpcaddress"] + port_config = cfg["grpcport"] + grpc_address = f"{address_config}:{port_config}" + grpc_thread = threading.Thread(target=grpc_server.listen, args=(grpc_address,), daemon=True) + grpc_thread.start() + + # start udp server + start_udp(server, address) + + # close handlers + close_onexec(server.fileno()) + + logging.info("CORE TLV API TCP/UDP listening on: %s:%s", host, port) + server.serve_forever() + + +def get_merged_config(filename): + """ + Return a configuration after merging config file and command-line arguments. 
+ + :param str filename: file name to merge configuration settings with + :return: merged configuration + :rtype: dict + """ + # these are the defaults used in the config file + default_log = os.path.join(constants.CORE_CONF_DIR, "logging.conf") + default_grpc_port = "50051" + default_address = "localhost" + defaults = { + "port": str(CORE_API_PORT), + "listenaddr": default_address, + "grpcport": default_grpc_port, + "grpcaddress": default_address, + "logfile": default_log + } + + parser = argparse.ArgumentParser( + description=f"CORE daemon v.{COREDPY_VERSION} instantiates Linux network namespace nodes.") + parser.add_argument("-f", "--configfile", dest="configfile", + help=f"read config from specified file; default = {filename}") + parser.add_argument("-p", "--port", dest="port", type=int, + help=f"port number to listen on; default = {CORE_API_PORT}") + parser.add_argument("--ovs", action="store_true", help="enable experimental ovs mode, default is false") + parser.add_argument("--grpc-port", dest="grpcport", + help=f"grpc port to listen on; default {default_grpc_port}") + parser.add_argument("--grpc-address", dest="grpcaddress", + help=f"grpc address to listen on; default {default_address}") + parser.add_argument("-l", "--logfile", help=f"core logging configuration; default {default_log}") + + # parse command line options + args = parser.parse_args() + + # convert ovs to internal format + args.ovs = "1" if args.ovs else "0" + + # read the config file + if args.configfile is not None: + filename = args.configfile + del args.configfile + cfg = ConfigParser(defaults) + cfg.read(filename) + + section = "core-daemon" + if not cfg.has_section(section): + cfg.add_section(section) + + # merge argparse with configparser + for opt in vars(args): + val = getattr(args, opt) + if val is not None: + cfg.set(section, opt, str(val)) + + return dict(cfg.items(section)) + + +def main(): + """ + Main program startup. 
+ + :return: nothing + """ + cfg = get_merged_config(f"{CORE_CONF_DIR}/core.conf") + load_logging_config(cfg["logfile"]) + banner() + try: + cored(cfg) + except KeyboardInterrupt: + logging.info("keyboard interrupt, stopping core daemon") + + +if __name__ == "__main__": + main() diff --git a/daemon/scripts/core-imn-to-xml b/daemon/scripts/core-imn-to-xml new file mode 100755 index 00000000..495093ed --- /dev/null +++ b/daemon/scripts/core-imn-to-xml @@ -0,0 +1,70 @@ +#!/usr/bin/env python3 +import argparse +import re +import sys +from pathlib import Path + +from core import utils +from core.api.grpc.client import CoreGrpcClient +from core.errors import CoreCommandError + +if __name__ == "__main__": + # parse flags + parser = argparse.ArgumentParser(description="Converts CORE imn files to xml") + parser.add_argument("-f", "--file", dest="file", help="imn file to convert") + parser.add_argument( + "-d", "--dest", dest="dest", default=None, help="destination for xml file, defaults to same location as imn" + ) + args = parser.parse_args() + + # validate provided file exists + imn_file = Path(args.file) + if not imn_file.exists(): + print(f"{args.file} does not exist") + sys.exit(1) + + # validate destination + if args.dest is not None: + dest = Path(args.dest) + if not dest.exists() or not dest.is_dir(): + print(f"{dest.resolve()} does not exist or is not a directory") + sys.exit(1) + xml_file = Path(dest, imn_file.with_suffix(".xml").name) + else: + xml_file = Path(imn_file.with_suffix(".xml").name) + + # validate xml file + if xml_file.exists(): + print(f"{xml_file.resolve()} already exists") + sys.exit(1) + + # run provided imn using core-gui batch mode + try: + print(f"running {imn_file.resolve()} in batch mode") + output = utils.cmd(f"core-gui --batch {imn_file.resolve()}") + last_line = output.split("\n")[-1].strip() + + # check for active session + if last_line == "Another session is active.": + print("need to restart core-daemon or shutdown previous batch session") + sys.exit(1) + + # parse session id + m = re.search(r"Session id is (\d+)\.", last_line) + if not m: + print(f"failed to find session id: {output}") + sys.exit(1) + session_id = int(m.group(1)) + print(f"created session {session_id}") + + # save xml and delete session + client = CoreGrpcClient() + with client.context_connect(): + print(f"saving xml {xml_file.resolve()}") + client.save_xml(session_id, xml_file) + + print(f"deleting session {session_id}") + client.delete_session(session_id) + except CoreCommandError as e: + print(f"core-gui batch failed for {imn_file.resolve()}: {e}") + sys.exit(1) diff --git a/daemon/scripts/core-manage b/daemon/scripts/core-manage new file mode 100755 index 00000000..5587c9ae --- /dev/null +++ b/daemon/scripts/core-manage @@ -0,0 +1,247 @@ +#!/usr/bin/env python3 +""" +core-manage: Helper tool to add, remove, or check for services, models, and +node types in a CORE installation. +""" + +import ast +import optparse +import os +import re +import sys + +from core import services +from core.constants import CORE_CONF_DIR + + +class FileUpdater: + """ + Helper class for changing configuration files. 
+ """ + actions = ("add", "remove", "check") + targets = ("service", "model", "nodetype") + + def __init__(self, action, target, data, options): + """ + """ + self.action = action + self.target = target + self.data = data + self.options = options + self.verbose = options.verbose + self.search, self.filename = self.get_filename(target) + + def process(self): + """ Invoke update_file() using a helper method depending on target. + """ + if self.verbose: + txt = "Updating" + if self.action == "check": + txt = "Checking" + sys.stdout.write(f"{txt} file: {self.filename}\n") + + if self.target == "service": + r = self.update_file(fn=self.update_services) + elif self.target == "model": + r = self.update_file(fn=self.update_emane_models) + elif self.target == "nodetype": + r = self.update_nodes_conf() + + if self.verbose: + txt = "" + if not r: + txt = "NOT " + if self.action == "check": + sys.stdout.write(f"String {txt} found.\n") + else: + sys.stdout.write(f"File {txt} updated.\n") + + return r + + def update_services(self, line): + """ Modify the __init__.py file having this format: + __all__ = ["quagga", "nrl", "xorp", "bird", ] + Returns True or False when "check" is the action, a modified line + otherwise. + """ + line = line.strip("\n") + key, valstr = line.split("= ") + vals = ast.literal_eval(valstr) + r = self.update_keyvals(key, vals) + if self.action == "check": + return r + valstr = str(r) + return "= ".join([key, valstr]) + "\n" + + def update_emane_models(self, line): + """ Modify the core.conf file having this format: + emane_models = RfPipe, Ieee80211abg, CommEffect, Bypass + Returns True or False when "check" is the action, a modified line + otherwise. + """ + line = line.strip("\n") + key, valstr = line.split("= ") + vals = valstr.split(", ") + r = self.update_keyvals(key, vals) + if self.action == "check": + return r + valstr = ", ".join(r) + return "= ".join([key, valstr]) + "\n" + + def update_keyvals(self, key, vals): + """ Perform self.action on (key, vals). + Returns True or False when "check" is the action, a modified line + otherwise. + """ + if self.action == "check": + if self.data in vals: + return True + else: + return False + elif self.action == "add": + if self.data not in vals: + vals.append(self.data) + elif self.action == "remove": + try: + vals.remove(self.data) + except ValueError: + pass + return vals + + def get_filename(self, target): + """ Return search string and filename based on target. + """ + if target == "service": + filename = os.path.abspath(services.__file__) + search = "__all__ =" + elif target == "model": + filename = os.path.join(CORE_CONF_DIR, "core.conf") + search = "emane_models =" + elif target == "nodetype": + if self.options.userpath is None: + raise ValueError("missing user path") + filename = os.path.join(self.options.userpath, "nodes.conf") + search = self.data + else: + raise ValueError("unknown target") + if not os.path.exists(filename): + raise ValueError(f"file {filename} does not exist") + return search, filename + + def update_file(self, fn=None): + """ Open a file and search for self.search, invoking the supplied + function on the matching line. Write file changes if necessary. + Returns True if the file has changed (or action is "check" and the + search string is found), False otherwise. 
+ """ + changed = False + output = "" # this accumulates output, assumes input is small + with open(self.filename, "r") as f: + for line in f: + if line[:len(self.search)] == self.search: + r = fn(line) # line may be modified by fn() here + if self.action == "check": + return r + else: + if line != r: + changed = True + line = r + output += line + if changed: + with open(self.filename, "w") as f: + f.write(output) + + return changed + + def update_nodes_conf(self): + """ Add/remove/check entries from nodes.conf. This file + contains a Tcl-formatted array of node types. The array index must be + properly set for new entries. Uses self.{action, filename, search, + data} variables as input and returns the same value as update_file(). + """ + changed = False + output = "" # this accumulates output, assumes input is small + with open(self.filename, "r") as f: + for line in f: + # make sure data is not added twice + if line.find(self.search) >= 0: + if self.action == "check": + return True + elif self.action == "add": + return False + elif self.action == "remove": + changed = True + continue + else: + output += line + + if self.action == "add": + index = int(re.match("^\d+", line).group(0)) + output += str(index + 1) + " " + self.data + "\n" + changed = True + if changed: + with open(self.filename, "w") as f: + f.write(output) + + return changed + + +def main(): + actions = ", ".join(FileUpdater.actions) + targets = ", ".join(FileUpdater.targets) + usagestr = "usage: %prog [-h] [options] \n" + usagestr += "\nHelper tool to add, remove, or check for " + usagestr += "services, models, and node types\nin a CORE installation.\n" + usagestr += "\nExamples:\n %prog add service newrouting" + usagestr += "\n %prog -v check model RfPipe" + usagestr += "\n %prog --userpath=\"$HOME/.core\" add nodetype \"{ftp ftp.gif ftp.gif {DefaultRoute FTP} netns {FTP server} }\" \n" + usagestr += f"\nArguments:\n should be one of: {actions}" + usagestr += f"\n should be one of: {targets}" + usagestr += f"\n is the text to {actions}" + parser = optparse.OptionParser(usage=usagestr) + parser.set_defaults(userpath=None, verbose=False, ) + + parser.add_option("--userpath", dest="userpath", type="string", + help="use the specified user path (e.g. 
\"$HOME/.core" \ + "\") to access nodes.conf") + parser.add_option("-v", "--verbose", dest="verbose", action="store_true", + help="be verbose when performing action") + + def usage(msg=None, err=0): + sys.stdout.write("\n") + if msg: + sys.stdout.write(msg + "\n\n") + parser.print_help() + sys.exit(err) + + (options, args) = parser.parse_args() + + if len(args) != 3: + usage("Missing required arguments!", 1) + + action = args[0] + if action not in FileUpdater.actions: + usage(f"invalid action {action}", 1) + + target = args[1] + if target not in FileUpdater.targets: + usage(f"invalid target {target}", 1) + + if target == "nodetype" and not options.userpath: + usage(f"user path option required for this target ({target})") + + data = args[2] + + try: + up = FileUpdater(action, target, data, options) + r = up.process() + except Exception as e: + sys.stderr.write(f"Exception: {e}\n") + sys.exit(1) + if not r: + sys.exit(1) + sys.exit(0) + + +if __name__ == "__main__": + main() diff --git a/daemon/core/scripts/gui.py b/daemon/scripts/core-pygui similarity index 52% rename from daemon/core/scripts/gui.py rename to daemon/scripts/core-pygui index 9c0560b2..888f4171 100755 --- a/daemon/core/scripts/gui.py +++ b/daemon/scripts/core-pygui @@ -1,50 +1,33 @@ +#!/usr/bin/env python3 import argparse import logging from logging.handlers import TimedRotatingFileHandler -from core.gui import appconfig, images +from core.gui import appconfig from core.gui.app import Application +from core.gui.images import Images - -def main() -> None: +if __name__ == "__main__": # parse flags - parser = argparse.ArgumentParser(description="CORE Python GUI") - parser.add_argument( - "-l", - "--level", - choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"], - default="INFO", - help="logging level", - ) + parser = argparse.ArgumentParser(description=f"CORE Python GUI") + parser.add_argument("-l", "--level", choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"], default="INFO", + help="logging level") parser.add_argument("-p", "--proxy", action="store_true", help="enable proxy") parser.add_argument("-s", "--session", type=int, help="session id to join") - parser.add_argument( - "--create-dir", action="store_true", help="create gui directory and exit" - ) args = parser.parse_args() # check home directory exists and create if necessary appconfig.check_directory() - if args.create_dir: - return # setup logging log_format = "%(asctime)s - %(levelname)s - %(module)s:%(funcName)s - %(message)s" stream_handler = logging.StreamHandler() - file_handler = TimedRotatingFileHandler( - filename=appconfig.LOG_PATH, when="D", backupCount=5 - ) + file_handler = TimedRotatingFileHandler(filename=appconfig.LOG_PATH, when="D", backupCount=5) log_level = logging.getLevelName(args.level) - logging.basicConfig( - level=log_level, format=log_format, handlers=[stream_handler, file_handler] - ) + logging.basicConfig(level=log_level, format=log_format, handlers=[stream_handler, file_handler]) logging.getLogger("PIL").setLevel(logging.ERROR) # start app - images.load_all() + Images.load_all() app = Application(args.proxy, args.session) app.mainloop() - - -if __name__ == "__main__": - main() diff --git a/daemon/core/scripts/routemonitor.py b/daemon/scripts/core-route-monitor similarity index 92% rename from daemon/core/scripts/routemonitor.py rename to daemon/scripts/core-route-monitor index 42fbf3a9..d644ae1b 100755 --- a/daemon/core/scripts/routemonitor.py +++ b/daemon/scripts/core-route-monitor @@ -1,3 +1,4 @@ +#!/usr/bin/env python3 import 
argparse import enum import select @@ -9,12 +10,13 @@ from argparse import ArgumentDefaultsHelpFormatter from functools import cmp_to_key from queue import Queue from threading import Thread +from typing import Dict, Tuple import grpc from core import utils from core.api.grpc.client import CoreGrpcClient -from core.api.grpc.wrappers import NodeType +from core.api.grpc.core_pb2 import NodeType SDT_HOST = "127.0.0.1" SDT_PORT = 50000 @@ -30,7 +32,7 @@ class RouteEnum(enum.Enum): class SdtClient: - def __init__(self, address: tuple[str, int]) -> None: + def __init__(self, address: Tuple[str, int]) -> None: self.sock = socket.create_connection(address) self.links = [] self.send(f'layer "{ROUTE_LAYER}"') @@ -84,21 +86,22 @@ class RouterMonitor: self.sdt = SdtClient((sdt_host, sdt_port)) self.nodes = self.get_nodes() - def get_nodes(self) -> dict[int, str]: + def get_nodes(self) -> Dict[int, str]: with self.core.context_connect(): if self.session is None: self.session = self.get_session() print("session: ", self.session) try: - session = self.core.get_session(self.session) + response = self.core.get_session(self.session) + nodes = response.session.nodes node_map = {} - for node in session.nodes.values(): + for node in nodes: if node.type != NodeType.DEFAULT: continue node_map[node.id] = node.channel if self.src_id is None: - _, ifaces, _ = self.core.get_node(self.session, node.id) - for iface in ifaces: + response = self.core.get_node(self.session, node.id) + for iface in response.ifaces: if self.src == iface.ip4: self.src_id = node.id break @@ -114,7 +117,8 @@ class RouterMonitor: return node_map def get_session(self) -> int: - sessions = self.core.get_sessions() + response = self.core.get_sessions() + sessions = response.sessions session = None if sessions: session = sessions[0] @@ -145,7 +149,7 @@ class RouterMonitor: self.manage_routes() self.route_time = time.monotonic() - def route_sort(self, x: tuple[str, int], y: tuple[str, int]) -> int: + def route_sort(self, x: Tuple[str, int], y: Tuple[str, int]) -> int: x_node = x[0] y_node = y[0] if x_node == self.src_id: diff --git a/daemon/core/scripts/serviceupdate.py b/daemon/scripts/core-service-update similarity index 50% rename from daemon/core/scripts/serviceupdate.py rename to daemon/scripts/core-service-update index 50ada96d..d0ca863f 100755 --- a/daemon/core/scripts/serviceupdate.py +++ b/daemon/scripts/core-service-update @@ -1,3 +1,4 @@ +#!/usr/bin/env python3 import argparse import re from io import TextIOWrapper @@ -5,15 +6,9 @@ from io import TextIOWrapper def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser( - description="Helps transition older CORE services to work with newer versions" - ) - parser.add_argument( - "-f", - "--file", - dest="file", - type=argparse.FileType("r"), - help="service file to update", - ) + description=f"Helps transition older CORE services to work with newer versions") + parser.add_argument("-f", "--file", dest="file", type=argparse.FileType("r"), + help=f"service file to update") return parser.parse_args() @@ -25,32 +20,17 @@ def update_service(service_file: TextIOWrapper) -> None: # rename dirs to directories line = re.sub(r"^(\s+)dirs", r"\1directories", line) # fix import states for service - line = re.sub( - r"^.+import.+CoreService.+$", - r"from core.services.coreservices import CoreService", - line, - ) + line = re.sub(r"^.+import.+CoreService.+$", + r"from core.services.coreservices import CoreService", line) # fix method signatures - line = re.sub( - r"def generateconfig\(cls, 
node, filename, services\)", - r"def generate_config(cls, node, filename)", - line, - ) - line = re.sub( - r"def getvalidate\(cls, node, services\)", - r"def get_validate(cls, node)", - line, - ) - line = re.sub( - r"def getstartup\(cls, node, services\)", - r"def get_startup(cls, node)", - line, - ) - line = re.sub( - r"def getconfigfilenames\(cls, nodenum, services\)", - r"def get_configs(cls, node)", - line, - ) + line = re.sub(r"def generateconfig\(cls, node, filename, services\)", + r"def generate_config(cls, node, filename)", line) + line = re.sub(r"def getvalidate\(cls, node, services\)", + r"def get_validate(cls, node)", line) + line = re.sub(r"def getstartup\(cls, node, services\)", + r"def get_startup(cls, node)", line) + line = re.sub(r"def getconfigfilenames\(cls, nodenum, services\)", + r"def get_configs(cls, node)", line) # remove unwanted lines if re.search(r"addservice\(", line): continue diff --git a/daemon/scripts/coresendmsg b/daemon/scripts/coresendmsg new file mode 100755 index 00000000..13e20b5c --- /dev/null +++ b/daemon/scripts/coresendmsg @@ -0,0 +1,279 @@ +#!/usr/bin/env python3 +""" +coresendmsg: utility for generating CORE messages +""" + +import optparse +import os +import socket +import sys + +from core.api.tlv import coreapi +from core.api.tlv.enumerations import CORE_API_PORT, MessageTypes, SessionTlvs +from core.emulator.enumerations import MessageFlags + + +def print_available_tlvs(t, tlv_class): + """ + Print a TLV list. + """ + print(f"TLVs available for {t} message:") + for tlv in sorted([tlv for tlv in tlv_class.tlv_type_map], key=lambda x: x.name): + print(tlv.name.lower()) + + +def print_examples(name): + """ + Print example usage of this script. + """ + examples = [ + ("node number=3 x_position=125 y_position=525", + "move node number 3 to x,y=(125,525)"), + ("node number=4 icon=/usr/local/share/core/icons/normal/router_red.gif", + "change node number 4\"s icon to red"), + ("node flags=add number=5 type=0 name=\"n5\" x_position=500 y_position=500", + "add a new router node n5"), + ("link n1_number=2 n2_number=3 delay=15000", + "set a 15ms delay on the link between n2 and n3"), + ("link n1_number=2 n2_number=3 gui_attributes=\"color=blue\"", + "change the color of the link between n2 and n3"), + ("link flags=add n1_number=4 n2_number=5 interface1_ip4=\"10.0.3.2\" " + "interface1_ip4_mask=24 interface2_ip4=\"10.0.3.1\" interface2_ip4_mask=24", + "link node n5 with n4 using the given interface addresses"), + ("execute flags=string,text node=1 number=1000 command=\"uname -a\" -l", + "run a command on node 1 and wait for the result"), + ("execute node=2 number=1001 command=\"killall ospfd\"", + "run a command on node 2 and ignore the result"), + ("file flags=add node=1 name=\"/var/log/test.log\" data=\"hello world.\"", + "write a test.log file on node 1 with the given contents"), + ("file flags=add node=2 name=\"test.log\" source_name=\"./test.log\"", + "move a test.log file from host to node 2"), + ] + print(f"Example {name} invocations:") + for cmd, descr in examples: + print(f" {name} {cmd}\n\t\t{descr}") + + +def receive_message(sock): + """ + Retrieve a message from a socket and return the CoreMessage object or + None upon disconnect. Socket data beyond the first message is dropped. 
+ """ + try: + # large receive buffer used for UDP sockets, instead of just receiving + # the 4-byte header + data = sock.recv(4096) + msghdr = data[:coreapi.CoreMessage.header_len] + except KeyboardInterrupt: + print("CTRL+C pressed") + sys.exit(1) + + if len(msghdr) == 0: + return None + + msgdata = None + msgtype, msgflags, msglen = coreapi.CoreMessage.unpack_header(msghdr) + + if msglen: + msgdata = data[coreapi.CoreMessage.header_len:] + try: + msgcls = coreapi.CLASS_MAP[msgtype] + except KeyError: + msg = coreapi.CoreMessage(msgflags, msghdr, msgdata) + msg.message_type = msgtype + print(f"unimplemented CORE message type: {msg.type_str()}") + return msg + if len(data) > msglen + coreapi.CoreMessage.header_len: + data_size = len(data) - (msglen + coreapi.CoreMessage.header_len) + print(f"received a message of type {msgtype}, dropping {data_size} bytes of extra data") + return msgcls(msgflags, msghdr, msgdata) + + +def connect_to_session(sock, requested): + """ + Use Session Messages to retrieve the current list of sessions and + connect to the first one. + """ + # request the session list + tlvdata = coreapi.CoreSessionTlv.pack(SessionTlvs.NUMBER.value, "") + flags = MessageFlags.STRING.value + smsg = coreapi.CoreSessionMessage.pack(flags, tlvdata) + sock.sendall(smsg) + + print("waiting for session list...") + smsgreply = receive_message(sock) + if smsgreply is None: + print("disconnected") + return False + + sessstr = smsgreply.get_tlv(SessionTlvs.NUMBER.value) + if sessstr is None: + print("missing session numbers") + return False + + # join the first session (that is not our own connection) + tmp, localport = sock.getsockname() + sessions = sessstr.split("|") + sessions.remove(str(localport)) + if len(sessions) == 0: + print("no sessions to join") + return False + + if not requested: + session = sessions[0] + elif requested in sessions: + session = requested + else: + print("requested session not found!") + return False + + print(f"joining session: {session}") + tlvdata = coreapi.CoreSessionTlv.pack(SessionTlvs.NUMBER.value, session) + flags = MessageFlags.ADD.value + smsg = coreapi.CoreSessionMessage.pack(flags, tlvdata) + sock.sendall(smsg) + return True + + +def receive_response(sock, opt): + """ + Receive and print a CORE message from the given socket. + """ + print("waiting for response...") + msg = receive_message(sock) + if msg is None: + print(f"disconnected from {opt.address}:{opt.port}") + sys.exit(0) + print(f"received message: {msg}") + + +def main(): + """ + Parse command-line arguments to build and send a CORE message. 
+ """ + types = [message_type.name.lower() for message_type in MessageTypes] + flags = [flag.name.lower() for flag in MessageFlags] + types_usage = " ".join(types) + flags_usage = " ".join(flags) + usagestr = ( + "usage: %prog [-h|-H] [options] [message-type] [flags=flags] " + "[message-TLVs]\n\n" + f"Supported message types:\n {types_usage}\n" + f"Supported message flags (flags=f1,f2,...):\n {flags_usage}" + ) + parser = optparse.OptionParser(usage=usagestr) + default_address = "localhost" + default_session = None + default_tcp = False + parser.set_defaults( + port=CORE_API_PORT, + address=default_address, + session=default_session, + listen=False, + examples=False, + tlvs=False, + tcp=default_tcp + ) + parser.add_option("-H", dest="examples", action="store_true", + help="show example usage help message and exit") + parser.add_option("-p", "--port", dest="port", type=int, + help=f"TCP port to connect to, default: {CORE_API_PORT}") + parser.add_option("-a", "--address", dest="address", type=str, + help=f"Address to connect to, default: {default_address}") + parser.add_option("-s", "--session", dest="session", type=str, + help=f"Session to join, default: {default_session}") + parser.add_option("-l", "--listen", dest="listen", action="store_true", + help="Listen for a response message and print it.") + parser.add_option("-t", "--list-tlvs", dest="tlvs", action="store_true", + help="List TLVs for the specified message type.") + parser.add_option("--tcp", dest="tcp", action="store_true", + help=f"Use TCP instead of UDP and connect to a session default: {default_tcp}") + + def usage(msg=None, err=0): + print() + if msg: + print(f"{msg}\n") + parser.print_help() + sys.exit(err) + + # parse command line opt + opt, args = parser.parse_args() + if opt.examples: + print_examples(os.path.basename(sys.argv[0])) + sys.exit(0) + if len(args) == 0: + usage("Please specify a message type to send.") + + # given a message type t, determine the message and TLV classes + t = args.pop(0) + t = t.lower() + if t not in types: + usage(f"Unknown message type requested: {t}") + message_type = MessageTypes[t.upper()] + msg_cls = coreapi.CLASS_MAP[message_type.value] + tlv_cls = msg_cls.tlv_class + + # list TLV types for this message type + if opt.tlvs: + print_available_tlvs(t, tlv_cls) + sys.exit(0) + + # build a message consisting of TLVs from "type=value" arguments + flagstr = "" + tlvdata = b"" + for a in args: + typevalue = a.split("=") + if len(typevalue) < 2: + usage(f"Use \"type=value\" syntax instead of \"{a}\".") + tlv_typestr = typevalue[0].lower() + tlv_valstr = "=".join(typevalue[1:]) + if tlv_typestr == "flags": + flagstr = tlv_valstr + continue + try: + tlv_type = tlv_cls.tlv_type_map[tlv_typestr.upper()] + tlvdata += tlv_cls.pack_string(tlv_type.value, tlv_valstr) + except KeyError: + usage(f"Unknown TLV: \"{tlv_typestr}\"") + + flags = 0 + for f in flagstr.split(","): + if f == "": + continue + try: + flag_enum = MessageFlags[f.upper()] + n = flag_enum.value + flags |= n + except KeyError: + usage(f"Invalid flag \"{f}\".") + + msg = msg_cls.pack(flags, tlvdata) + + if opt.tcp: + protocol = socket.SOCK_STREAM + else: + protocol = socket.SOCK_DGRAM + + sock = socket.socket(socket.AF_INET, protocol) + sock.setblocking(True) + + try: + sock.connect((opt.address, opt.port)) + except Exception as e: + print(f"Error connecting to {opt.address}:{opt.port}:\n\t{e}") + sys.exit(1) + + if opt.tcp and not connect_to_session(sock, opt.session): + print("warning: continuing without joining a session!") + + 
sock.sendall(msg) + if opt.listen: + receive_response(sock, opt) + if opt.tcp: + sock.shutdown(socket.SHUT_RDWR) + sock.close() + sys.exit(0) + + +if __name__ == "__main__": + main() diff --git a/daemon/tests/conftest.py b/daemon/tests/conftest.py index b668fb07..a558fcec 100644 --- a/daemon/tests/conftest.py +++ b/daemon/tests/conftest.py @@ -7,9 +7,11 @@ import time import mock import pytest +from mock.mock import MagicMock from core.api.grpc.client import InterfaceHelper from core.api.grpc.server import CoreGrpcServer +from core.api.tlv.corehandlers import CoreHandler from core.emulator.coreemu import CoreEmu from core.emulator.data import IpPrefixes from core.emulator.distributed import DistributedServer @@ -58,7 +60,9 @@ def patcher(request): patch_manager.patch_obj( LinuxNetClient, "get_mac", return_value="00:00:00:00:00:00" ) - patch_manager.patch_obj(CoreNode, "create_file") + patch_manager.patch_obj(CoreNode, "nodefile") + patch_manager.patch_obj(Session, "write_state") + patch_manager.patch_obj(Session, "write_nodes") yield patch_manager patch_manager.shutdown() @@ -74,7 +78,6 @@ def global_coreemu(patcher): def global_session(request, patcher, global_coreemu): mkdir = not request.config.getoption("mock") session = Session(1000, {"emane_prefix": "/usr"}, mkdir) - session.service_manager = global_coreemu.service_manager yield session session.shutdown() @@ -100,6 +103,17 @@ def module_grpc(global_coreemu): grpc_server.server.stop(None) +@pytest.fixture(scope="module") +def module_coretlv(patcher, global_coreemu, global_session): + request_mock = MagicMock() + request_mock.fileno = MagicMock(return_value=1) + server = MockServer(global_coreemu) + request_handler = CoreHandler(request_mock, "", server) + request_handler.session = global_session + request_handler.add_session_handlers() + yield request_handler + + @pytest.fixture def grpc_server(module_grpc): yield module_grpc @@ -115,6 +129,16 @@ def session(global_session): global_session.clear() +@pytest.fixture +def coretlv(module_coretlv): + session = module_coretlv.session + session.set_state(EventTypes.CONFIGURATION_STATE) + coreemu = module_coretlv.coreemu + coreemu.sessions[session.id] = session + yield module_coretlv + coreemu.shutdown() + + def pytest_addoption(parser): parser.addoption("--distributed", help="distributed server address") parser.addoption("--mock", action="store_true", help="run without mocking") diff --git a/daemon/tests/emane/test_emane.py b/daemon/tests/emane/test_emane.py index 2ddb1a5d..ccbfb446 100644 --- a/daemon/tests/emane/test_emane.py +++ b/daemon/tests/emane/test_emane.py @@ -1,7 +1,7 @@ """ Unit tests for testing CORE EMANE networks. 
""" -from pathlib import Path +import os from tempfile import TemporaryFile from typing import Type from xml.etree import ElementTree @@ -9,17 +9,17 @@ from xml.etree import ElementTree import pytest from core import utils +from core.emane.bypass import EmaneBypassModel +from core.emane.commeffect import EmaneCommEffectModel from core.emane.emanemodel import EmaneModel -from core.emane.models.bypass import EmaneBypassModel -from core.emane.models.commeffect import EmaneCommEffectModel -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel -from core.emane.models.rfpipe import EmaneRfPipeModel -from core.emane.models.tdma import EmaneTdmaModel +from core.emane.ieee80211abg import EmaneIeee80211abgModel from core.emane.nodes import EmaneNet -from core.emulator.data import IpPrefixes +from core.emane.rfpipe import EmaneRfPipeModel +from core.emane.tdma import EmaneTdmaModel +from core.emulator.data import IpPrefixes, NodeOptions from core.emulator.session import Session from core.errors import CoreCommandError, CoreError -from core.nodes.base import CoreNode, Position +from core.nodes.base import CoreNode _EMANE_MODELS = [ EmaneIeee80211abgModel, @@ -28,8 +28,7 @@ _EMANE_MODELS = [ EmaneCommEffectModel, EmaneTdmaModel, ] -_DIR: Path = Path(__file__).resolve().parent -_SCHEDULE: Path = _DIR / "../../examples/tdma/schedule.xml" +_DIR = os.path.dirname(os.path.abspath(__file__)) def ping( @@ -53,22 +52,19 @@ class TestEmane: """ # create emane node for networking the core nodes session.set_location(47.57917, -122.13232, 2.00000, 1.0) - options = EmaneNet.create_options() - options.emane_model = EmaneIeee80211abgModel.name - position = Position(x=80, y=50) - emane_net1 = session.add_node(EmaneNet, position=position, options=options) - options = EmaneNet.create_options() - options.emane_model = EmaneRfPipeModel.name - position = Position(x=80, y=50) - emane_net2 = session.add_node(EmaneNet, position=position, options=options) + options = NodeOptions() + options.set_position(80, 50) + options.emane = EmaneIeee80211abgModel.name + emane_net1 = session.add_node(EmaneNet, options=options) + options.emane = EmaneRfPipeModel.name + emane_net2 = session.add_node(EmaneNet, options=options) # create nodes - options = CoreNode.create_options() - options.model = "mdr" - position = Position(x=150, y=150) - node1 = session.add_node(CoreNode, position=position, options=options) - position = Position(x=300, y=150) - node2 = session.add_node(CoreNode, position=position, options=options) + options = NodeOptions(model="mdr") + options.set_position(150, 150) + node1 = session.add_node(CoreNode, options=options) + options.set_position(300, 150) + node2 = session.add_node(CoreNode, options=options) # create interfaces ip_prefix1 = IpPrefixes("10.0.0.0/24") @@ -103,24 +99,25 @@ class TestEmane: # create emane node for networking the core nodes session.set_location(47.57917, -122.13232, 2.00000, 1.0) - options = EmaneNet.create_options() - options.emane_model = model.name - position = Position(x=80, y=50) - emane_network = session.add_node(EmaneNet, position=position, options=options) + options = NodeOptions() + options.set_position(80, 50) + emane_network = session.add_node(EmaneNet, options=options) + session.emane.set_model(emane_network, model) # configure tdma if model == EmaneTdmaModel: - session.emane.set_config( - emane_network.id, EmaneTdmaModel.name, {"schedule": str(_SCHEDULE)} + session.emane.set_model_config( + emane_network.id, + EmaneTdmaModel.name, + {"schedule": os.path.join(_DIR, 
"../../examples/tdma/schedule.xml")}, ) # create nodes - options = CoreNode.create_options() - options.model = "mdr" - position = Position(x=150, y=150) - node1 = session.add_node(CoreNode, position=position, options=options) - position = Position(x=300, y=150) - node2 = session.add_node(CoreNode, position=position, options=options) + options = NodeOptions(model="mdr") + options.set_position(150, 150) + node1 = session.add_node(CoreNode, options=options) + options.set_position(300, 150) + node2 = session.add_node(CoreNode, options=options) for i, node in enumerate([node1, node2]): node.setposition(x=150 * (i + 1), y=150) @@ -146,23 +143,21 @@ class TestEmane: """ # create emane node for networking the core nodes session.set_location(47.57917, -122.13232, 2.00000, 1.0) - options = EmaneNet.create_options() - options.emane_model = EmaneIeee80211abgModel.name - position = Position(x=80, y=50) - emane_network = session.add_node(EmaneNet, position=position, options=options) + options = NodeOptions() + options.set_position(80, 50) + emane_network = session.add_node(EmaneNet, options=options) config_key = "txpower" config_value = "10" - session.emane.set_config( - emane_network.id, EmaneIeee80211abgModel.name, {config_key: config_value} + session.emane.set_model( + emane_network, EmaneIeee80211abgModel, {config_key: config_value} ) # create nodes - options = CoreNode.create_options() - options.model = "mdr" - position = Position(x=150, y=150) - node1 = session.add_node(CoreNode, position=position, options=options) - position = Position(x=300, y=150) - node2 = session.add_node(CoreNode, position=position, options=options) + options = NodeOptions(model="mdr") + options.set_position(150, 150) + node1 = session.add_node(CoreNode, options=options) + options.set_position(300, 150) + node2 = session.add_node(CoreNode, options=options) for i, node in enumerate([node1, node2]): node.setposition(x=150 * (i + 1), y=150) @@ -180,7 +175,7 @@ class TestEmane: # save xml xml_file = tmpdir.join("session.xml") file_path = xml_file.strpath - session.save_xml(Path(file_path)) + session.save_xml(file_path) # verify xml file was created and can be parsed assert xml_file.isfile() @@ -196,11 +191,12 @@ class TestEmane: assert not session.get_node(node2_id, CoreNode) # load saved xml - session.open_xml(Path(file_path), start=True) + session.open_xml(file_path, start=True) # retrieve configuration we set originally - config = session.emane.get_config(emane_id, EmaneIeee80211abgModel.name) - value = config[config_key] + value = str( + session.emane.get_config(config_key, emane_id, EmaneIeee80211abgModel.name) + ) # verify nodes and configuration were restored assert session.get_node(node1_id, CoreNode) @@ -212,26 +208,23 @@ class TestEmane: self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes ): # create nodes - options = CoreNode.create_options() - options.model = "mdr" - position = Position(x=50, y=50) - node1 = session.add_node(CoreNode, position=position, options=options) + options = NodeOptions(model="mdr", x=50, y=50) + node1 = session.add_node(CoreNode, options=options) iface1_data = ip_prefixes.create_iface(node1) - node2 = session.add_node(CoreNode, position=position, options=options) + node2 = session.add_node(CoreNode, options=options) iface2_data = ip_prefixes.create_iface(node2) # create emane node - options = EmaneNet.create_options() - options.emane_model = EmaneRfPipeModel.name + options = NodeOptions(model=None, emane=EmaneRfPipeModel.name) emane_node = session.add_node(EmaneNet, 
options=options) # create links session.add_link(node1.id, emane_node.id, iface1_data) session.add_link(node2.id, emane_node.id, iface2_data) - # set node specific config + # set node specific conifg datarate = "101" - session.emane.set_config( + session.emane.set_model_config( node1.id, EmaneRfPipeModel.name, {"datarate": datarate} ) @@ -241,7 +234,7 @@ class TestEmane: # save xml xml_file = tmpdir.join("session.xml") file_path = xml_file.strpath - session.save_xml(Path(file_path)) + session.save_xml(file_path) # verify xml file was created and can be parsed assert xml_file.isfile() @@ -259,31 +252,32 @@ class TestEmane: assert not session.get_node(emane_node.id, EmaneNet) # load saved xml - session.open_xml(Path(file_path), start=True) + session.open_xml(file_path, start=True) # verify nodes have been recreated assert session.get_node(node1.id, CoreNode) assert session.get_node(node2.id, CoreNode) assert session.get_node(emane_node.id, EmaneNet) - assert len(session.link_manager.links()) == 2 - config = session.emane.get_config(node1.id, EmaneRfPipeModel.name) + links = [] + for node_id in session.nodes: + node = session.nodes[node_id] + links += node.links() + assert len(links) == 2 + config = session.emane.get_model_config(node1.id, EmaneRfPipeModel.name) assert config["datarate"] == datarate def test_xml_emane_interface_config( self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes ): # create nodes - options = CoreNode.create_options() - options.model = "mdr" - position = Position(x=50, y=50) - node1 = session.add_node(CoreNode, position=position, options=options) + options = NodeOptions(model="mdr", x=50, y=50) + node1 = session.add_node(CoreNode, options=options) iface1_data = ip_prefixes.create_iface(node1) - node2 = session.add_node(CoreNode, position=position, options=options) + node2 = session.add_node(CoreNode, options=options) iface2_data = ip_prefixes.create_iface(node2) # create emane node - options = EmaneNet.create_options() - options.emane_model = EmaneRfPipeModel.name + options = NodeOptions(model=None, emane=EmaneRfPipeModel.name) emane_node = session.add_node(EmaneNet, options=options) # create links @@ -293,7 +287,7 @@ class TestEmane: # set node specific conifg datarate = "101" config_id = utils.iface_config_id(node1.id, iface1_data.id) - session.emane.set_config( + session.emane.set_model_config( config_id, EmaneRfPipeModel.name, {"datarate": datarate} ) @@ -303,7 +297,7 @@ class TestEmane: # save xml xml_file = tmpdir.join("session.xml") file_path = xml_file.strpath - session.save_xml(Path(file_path)) + session.save_xml(file_path) # verify xml file was created and can be parsed assert xml_file.isfile() @@ -321,12 +315,16 @@ class TestEmane: assert not session.get_node(emane_node.id, EmaneNet) # load saved xml - session.open_xml(Path(file_path), start=True) + session.open_xml(file_path, start=True) # verify nodes have been recreated assert session.get_node(node1.id, CoreNode) assert session.get_node(node2.id, CoreNode) assert session.get_node(emane_node.id, EmaneNet) - assert len(session.link_manager.links()) == 2 - config = session.emane.get_config(config_id, EmaneRfPipeModel.name) + links = [] + for node_id in session.nodes: + node = session.nodes[node_id] + links += node.links() + assert len(links) == 2 + config = session.emane.get_model_config(config_id, EmaneRfPipeModel.name) assert config["datarate"] == datarate diff --git a/daemon/tests/test_conf.py b/daemon/tests/test_conf.py index 2c74841d..e90acfbd 100644 --- a/daemon/tests/test_conf.py 
+++ b/daemon/tests/test_conf.py @@ -1,12 +1,13 @@ import pytest from core.config import ( - ConfigString, ConfigurableManager, ConfigurableOptions, + Configuration, ModelManager, ) -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel +from core.emane.ieee80211abg import EmaneIeee80211abgModel +from core.emulator.enumerations import ConfigDataTypes from core.emulator.session import Session from core.location.mobility import BasicRangeModel from core.nodes.network import WlanNode @@ -15,7 +16,10 @@ from core.nodes.network import WlanNode class TestConfigurableOptions(ConfigurableOptions): name1 = "value1" name2 = "value2" - options = [ConfigString(id=name1, label=name1), ConfigString(id=name2, label=name2)] + options = [ + Configuration(_id=name1, _type=ConfigDataTypes.STRING, label=name1), + Configuration(_id=name2, _type=ConfigDataTypes.STRING, label=name2), + ] class TestConf: diff --git a/daemon/tests/test_config_services.py b/daemon/tests/test_config_services.py index 876b7f32..eaba4d47 100644 --- a/daemon/tests/test_config_services.py +++ b/daemon/tests/test_config_services.py @@ -1,14 +1,14 @@ -from pathlib import Path from unittest import mock import pytest -from core.config import ConfigBool, ConfigString +from core.config import Configuration from core.configservice.base import ( ConfigService, ConfigServiceBootError, ConfigServiceMode, ) +from core.emulator.enumerations import ConfigDataTypes from core.errors import CoreCommandError, CoreError TEMPLATE_TEXT = "echo hello" @@ -26,10 +26,13 @@ class MyService(ConfigService): shutdown = [f"pkill {files[0]}"] validation_mode = ConfigServiceMode.BLOCKING default_configs = [ - ConfigString(id="value1", label="Text"), - ConfigBool(id="value2", label="Boolean"), - ConfigString( - id="value3", label="Multiple Choice", options=["value1", "value2", "value3"] + Configuration(_id="value1", _type=ConfigDataTypes.STRING, label="Text"), + Configuration(_id="value2", _type=ConfigDataTypes.BOOL, label="Boolean"), + Configuration( + _id="value3", + _type=ConfigDataTypes.STRING, + label="Multiple Choice", + options=["value1", "value2", "value3"], ), ] modes = { @@ -65,8 +68,7 @@ class TestConfigServices: service.create_dirs() # then - directory = Path(MyService.directories[0]) - node.create_dir.assert_called_with(directory) + node.privatedir.assert_called_with(MyService.directories[0]) def test_create_files_custom(self): # given @@ -79,8 +81,7 @@ class TestConfigServices: service.create_files() # then - file_path = Path(MyService.files[0]) - node.create_file.assert_called_with(file_path, text) + node.nodefile.assert_called_with(MyService.files[0], text) def test_create_files_text(self): # given @@ -91,8 +92,7 @@ class TestConfigServices: service.create_files() # then - file_path = Path(MyService.files[0]) - node.create_file.assert_called_with(file_path, TEMPLATE_TEXT) + node.nodefile.assert_called_with(MyService.files[0], TEMPLATE_TEXT) def test_run_startup(self): # given diff --git a/daemon/tests/test_core.py b/daemon/tests/test_core.py index 919e4478..c4465863 100644 --- a/daemon/tests/test_core.py +++ b/daemon/tests/test_core.py @@ -2,22 +2,23 @@ Unit tests for testing basic CORE networks. 
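+Covers wired topologies, vnode clients, interface handling, wlan pings, and ns-2 scripted mobility.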
""" +import os import threading -from pathlib import Path -from typing import List, Type +from typing import Type import pytest -from core.emulator.data import IpPrefixes +from core.emulator.data import IpPrefixes, NodeOptions +from core.emulator.enumerations import MessageFlags from core.emulator.session import Session from core.errors import CoreCommandError from core.location.mobility import BasicRangeModel, Ns2ScriptedMobility from core.nodes.base import CoreNode, NodeBase from core.nodes.network import HubNode, PtpNet, SwitchNode, WlanNode -_PATH: Path = Path(__file__).resolve().parent -_MOBILITY_FILE: Path = _PATH / "mobility.scen" -_WIRED: List = [PtpNet, HubNode, SwitchNode] +_PATH = os.path.abspath(os.path.dirname(__file__)) +_MOBILITY_FILE = os.path.join(_PATH, "mobility.scen") +_WIRED = [PtpNet, HubNode, SwitchNode] def ping(from_node: CoreNode, to_node: CoreNode, ip_prefixes: IpPrefixes): @@ -62,6 +63,83 @@ class TestCore: status = ping(node1, node2, ip_prefixes) assert not status + def test_vnode_client(self, request, session: Session, ip_prefixes: IpPrefixes): + """ + Test vnode client methods. + + :param request: pytest request + :param session: session for test + :param ip_prefixes: generates ip addresses for nodes + """ + # create ptp + ptp_node = session.add_node(PtpNet) + + # create nodes + node1 = session.add_node(CoreNode) + node2 = session.add_node(CoreNode) + + # link nodes to ptp net + for node in [node1, node2]: + iface_data = ip_prefixes.create_iface(node) + session.add_link(node.id, ptp_node.id, iface1_data=iface_data) + + # get node client for testing + client = node1.client + + # instantiate session + session.instantiate() + + # check we are connected + assert client.connected() + + # validate command + if not request.config.getoption("mock"): + assert client.check_cmd("echo hello") == "hello" + + def test_iface(self, session: Session, ip_prefixes: IpPrefixes): + """ + Test interface methods. + + :param session: session for test + :param ip_prefixes: generates ip addresses for nodes + """ + + # create ptp + ptp_node = session.add_node(PtpNet) + + # create nodes + node1 = session.add_node(CoreNode) + node2 = session.add_node(CoreNode) + + # link nodes to ptp net + for node in [node1, node2]: + iface = ip_prefixes.create_iface(node) + session.add_link(node.id, ptp_node.id, iface1_data=iface) + + # instantiate session + session.instantiate() + + # check link data gets generated + assert ptp_node.links(MessageFlags.ADD) + + # check common nets exist between linked nodes + assert node1.commonnets(node2) + assert node2.commonnets(node1) + + # check we can retrieve interface id + assert 0 in node1.ifaces + assert 0 in node2.ifaces + + # check interface parameters + iface = node1.get_iface(0) + iface.setparam("test", 1) + assert iface.getparam("test") == 1 + assert iface.getparams() + + # delete interface and test that if no longer exists + node1.delete_iface(0) + assert 0 not in node1.ifaces + def test_wlan_ping(self, session: Session, ip_prefixes: IpPrefixes): """ Test basic wlan network. 
@@ -75,8 +153,8 @@ class TestCore: session.mobility.set_model(wlan_node, BasicRangeModel) # create nodes - options = CoreNode.create_options() - options.model = "mdr" + options = NodeOptions(model="mdr") + options.set_position(0, 0) node1 = session.add_node(CoreNode, options=options) node2 = session.add_node(CoreNode, options=options) @@ -105,8 +183,8 @@ class TestCore: session.mobility.set_model(wlan_node, BasicRangeModel) # create nodes - options = CoreNode.create_options() - options.model = "mdr" + options = NodeOptions(model="mdr") + options.set_position(0, 0) node1 = session.add_node(CoreNode, options=options) node2 = session.add_node(CoreNode, options=options) @@ -117,7 +195,7 @@ class TestCore: # configure mobility script for session config = { - "file": str(_MOBILITY_FILE), + "file": _MOBILITY_FILE, "refresh_ms": "50", "loop": "1", "autostart": "0.0", diff --git a/daemon/tests/test_distributed.py b/daemon/tests/test_distributed.py index 3a9d43fb..01362cae 100644 --- a/daemon/tests/test_distributed.py +++ b/daemon/tests/test_distributed.py @@ -1,3 +1,4 @@ +from core.emulator.data import NodeOptions from core.emulator.session import Session from core.nodes.base import CoreNode from core.nodes.network import HubNode @@ -11,7 +12,8 @@ class TestDistributed: # when session.distributed.add_server(server_name, host) - node = session.add_node(CoreNode, server=server_name) + options = NodeOptions(server=server_name) + node = session.add_node(CoreNode, options=options) session.instantiate() # then @@ -27,13 +29,12 @@ class TestDistributed: # when session.distributed.add_server(server_name, host) - node1 = session.add_node(HubNode) - node2 = session.add_node(HubNode, server=server_name) - session.add_link(node1.id, node2.id) + options = NodeOptions(server=server_name) + node = session.add_node(HubNode, options=options) session.instantiate() # then - assert node2.server is not None - assert node2.server.name == server_name - assert node2.server.host == host - assert len(session.distributed.tunnels) == 1 + assert node.server is not None + assert node.server.name == server_name + assert node.server.host == host + assert len(session.distributed.tunnels) > 0 diff --git a/daemon/tests/test_grpc.py b/daemon/tests/test_grpc.py index 9aed3395..a4efd6d9 100644 --- a/daemon/tests/test_grpc.py +++ b/daemon/tests/test_grpc.py @@ -1,5 +1,4 @@ import time -from pathlib import Path from queue import Queue from tempfile import TemporaryFile from typing import Optional @@ -8,34 +7,19 @@ import grpc import pytest from mock import patch -from core.api.grpc import wrappers -from core.api.grpc.client import CoreGrpcClient, InterfaceHelper, MoveNodesStreamer +from core.api.grpc import core_pb2 +from core.api.grpc.client import CoreGrpcClient, InterfaceHelper +from core.api.grpc.emane_pb2 import EmaneModelConfig +from core.api.grpc.mobility_pb2 import MobilityAction, MobilityConfig from core.api.grpc.server import CoreGrpcServer -from core.api.grpc.wrappers import ( - ConfigOption, - ConfigOptionType, - EmaneModelConfig, - Event, - Geo, - Hook, - Interface, - Link, - LinkOptions, - MobilityAction, - MoveNodesRequest, - Node, - NodeServiceData, - NodeType, - Position, - ServiceAction, - ServiceValidationMode, - SessionLocation, - SessionState, -) -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel +from core.api.grpc.services_pb2 import ServiceAction, ServiceConfig, ServiceFileConfig +from core.api.grpc.wlan_pb2 import WlanConfig +from core.api.tlv.dataconversion import ConfigShim +from 
core.api.tlv.enumerations import ConfigFlags +from core.emane.ieee80211abg import EmaneIeee80211abgModel from core.emane.nodes import EmaneNet -from core.emulator.data import EventData, IpPrefixes, NodeData -from core.emulator.enumerations import EventTypes, ExceptionLevels, MessageFlags +from core.emulator.data import EventData, IpPrefixes, NodeData, NodeOptions +from core.emulator.enumerations import EventTypes, ExceptionLevels, NodeTypes from core.errors import CoreError from core.location.mobility import BasicRangeModel, Ns2ScriptedMobility from core.nodes.base import CoreNode @@ -44,27 +28,36 @@ from core.xml.corexml import CoreXmlWriter class TestGrpc: - @pytest.mark.parametrize("definition", [False, True]) - def test_start_session(self, grpc_server: CoreGrpcServer, definition): + def test_start_session(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() - with client.context_connect(): - session = client.create_session() - position = Position(x=50, y=100) - node1 = session.add_node(1, position=position) - position = Position(x=100, y=100) - node2 = session.add_node(2, position=position) - position = Position(x=200, y=200) - wlan_node = session.add_node(3, _type=NodeType.WIRELESS_LAN, position=position) + session = grpc_server.coreemu.create_session() + position = core_pb2.Position(x=50, y=100) + node1 = core_pb2.Node(id=1, position=position, model="PC") + position = core_pb2.Position(x=100, y=100) + node2 = core_pb2.Node(id=2, position=position, model="PC") + position = core_pb2.Position(x=200, y=200) + wlan_node = core_pb2.Node( + id=3, type=NodeTypes.WIRELESS_LAN.value, position=position + ) + nodes = [node1, node2, wlan_node] iface_helper = InterfaceHelper(ip4_prefix="10.83.0.0/16") iface1_id = 0 iface1 = iface_helper.create_iface(node1.id, iface1_id) iface2_id = 0 iface2 = iface_helper.create_iface(node2.id, iface2_id) - link = Link(node1_id=node1.id, node2_id=node2.id, iface1=iface1, iface2=iface2) - session.links = [link] - hook = Hook(state=SessionState.RUNTIME, file="echo.sh", data="echo hello") - session.hooks = {hook.file: hook} + link = core_pb2.Link( + type=core_pb2.LinkType.WIRED, + node1_id=node1.id, + node2_id=node2.id, + iface1=iface1, + iface2=iface2, + ) + links = [link] + hook = core_pb2.Hook( + state=core_pb2.SessionState.RUNTIME, file="echo.sh", data="echo hello" + ) + hooks = [hook] location_x = 5 location_y = 10 location_z = 15 @@ -72,7 +65,7 @@ class TestGrpc: location_lon = 30 location_alt = 40 location_scale = 5 - session.location = SessionLocation( + location = core_pb2.SessionLocation( x=location_x, y=location_y, z=location_z, @@ -81,88 +74,93 @@ class TestGrpc: alt=location_alt, scale=location_scale, ) - - # setup wlan config + emane_config_key = "platform_id_start" + emane_config_value = "2" + emane_config = {emane_config_key: emane_config_value} + model_node_id = 20 + model_config_key = "bandwidth" + model_config_value = "500000" + model_config = EmaneModelConfig( + node_id=model_node_id, + iface_id=-1, + model=EmaneIeee80211abgModel.name, + config={model_config_key: model_config_value}, + ) + model_configs = [model_config] wlan_config_key = "range" wlan_config_value = "333" - wlan_node.set_wlan({wlan_config_key: wlan_config_value}) - - # setup mobility config + wlan_config = WlanConfig( + node_id=wlan_node.id, config={wlan_config_key: wlan_config_value} + ) + wlan_configs = [wlan_config] mobility_config_key = "refresh_ms" mobility_config_value = "60" - wlan_node.set_mobility({mobility_config_key: mobility_config_value}) - - # setup 
service config - service_name = "DefaultRoute" - service_validate = ["echo hello"] - node1.service_configs[service_name] = NodeServiceData( - executables=[], - dependencies=[], - dirs=[], - configs=[], - startup=[], - validate=service_validate, - validation_mode=ServiceValidationMode.NON_BLOCKING, - validation_timer=0, - shutdown=[], - meta="", + mobility_config = MobilityConfig( + node_id=wlan_node.id, config={mobility_config_key: mobility_config_value} ) - - # setup service file config - service_file = "defaultroute.sh" - service_file_data = "echo hello" - node1.service_file_configs[service_name] = {service_file: service_file_data} - - # setup session option - option_key = "controlnet" - option_value = "172.16.0.0/24" - session.set_options({option_key: option_value}) + mobility_configs = [mobility_config] + service_config = ServiceConfig( + node_id=node1.id, service="DefaultRoute", validate=["echo hello"] + ) + service_configs = [service_config] + service_file_config = ServiceFileConfig( + node_id=node1.id, + service="DefaultRoute", + file="defaultroute.sh", + data="echo hello", + ) + service_file_configs = [service_file_config] # when with patch.object(CoreXmlWriter, "write"): with client.context_connect(): - client.start_session(session, definition=definition) + client.start_session( + session.id, + nodes, + links, + location, + hooks, + emane_config, + model_configs, + wlan_configs, + mobility_configs, + service_configs, + service_file_configs, + ) # then - real_session = grpc_server.coreemu.sessions[session.id] - if definition: - state = EventTypes.DEFINITION_STATE - else: - state = EventTypes.RUNTIME_STATE - assert real_session.state == state - assert node1.id in real_session.nodes - assert node2.id in real_session.nodes - assert wlan_node.id in real_session.nodes - assert iface1_id in real_session.nodes[node1.id].ifaces - assert iface2_id in real_session.nodes[node2.id].ifaces - hook_file, hook_data = real_session.hooks[EventTypes.RUNTIME_STATE][0] + assert node1.id in session.nodes + assert node2.id in session.nodes + assert wlan_node.id in session.nodes + assert iface1_id in session.nodes[node1.id].ifaces + assert iface2_id in session.nodes[node2.id].ifaces + hook_file, hook_data = session.hooks[EventTypes.RUNTIME_STATE][0] assert hook_file == hook.file assert hook_data == hook.data - assert real_session.location.refxyz == (location_x, location_y, location_z) - assert real_session.location.refgeo == ( - location_lat, - location_lon, - location_alt, - ) - assert real_session.location.refscale == location_scale - set_wlan_config = real_session.mobility.get_model_config( + assert session.location.refxyz == (location_x, location_y, location_z) + assert session.location.refgeo == (location_lat, location_lon, location_alt) + assert session.location.refscale == location_scale + assert session.emane.get_config(emane_config_key) == emane_config_value + set_wlan_config = session.mobility.get_model_config( wlan_node.id, BasicRangeModel.name ) assert set_wlan_config[wlan_config_key] == wlan_config_value - set_mobility_config = real_session.mobility.get_model_config( + set_mobility_config = session.mobility.get_model_config( wlan_node.id, Ns2ScriptedMobility.name ) assert set_mobility_config[mobility_config_key] == mobility_config_value - service = real_session.services.get_service( - node1.id, service_name, default_service=True + set_model_config = session.emane.get_model_config( + model_node_id, EmaneIeee80211abgModel.name ) - assert service.validate == tuple(service_validate) - real_node1 
= real_session.get_node(node1.id, CoreNode) - service_file = real_session.services.get_service_file( - real_node1, service_name, service_file + assert set_model_config[model_config_key] == model_config_value + service = session.services.get_service( + node1.id, service_config.service, default_service=True ) - assert service_file.data == service_file_data - assert option_value == real_session.options.get(option_key) + assert service.validate == tuple(service_config.validate) + service_file = session.services.get_service_file( + node1, service_file_config.service, service_file_config.file + ) + assert service_file.data == service_file_config.data @pytest.mark.parametrize("session_id", [None, 6013]) def test_create_session( @@ -173,14 +171,16 @@ class TestGrpc: # when with client.context_connect(): - created_session = client.create_session(session_id) + response = client.create_session(session_id) # then - assert isinstance(created_session, wrappers.Session) - session = grpc_server.coreemu.sessions.get(created_session.id) + assert isinstance(response.session_id, int) + assert isinstance(response.state, int) + session = grpc_server.coreemu.sessions.get(response.session_id) assert session is not None + assert session.state == EventTypes(response.state) if session_id is not None: - assert created_session.id == session_id + assert response.session_id == session_id assert session.id == session_id @pytest.mark.parametrize("session_id, expected", [(None, True), (6013, False)]) @@ -195,10 +195,10 @@ class TestGrpc: # then with client.context_connect(): - result = client.delete_session(session_id) + response = client.delete_session(session_id) # then - assert result is expected + assert response.result is expected assert grpc_server.coreemu.sessions.get(session_id) is None def test_get_session(self, grpc_server: CoreGrpcServer): @@ -210,12 +210,12 @@ class TestGrpc: # then with client.context_connect(): - session = client.get_session(session.id) + response = client.get_session(session.id) # then - assert session.state == SessionState.DEFINITION - assert len(session.nodes) == 1 - assert len(session.links) == 0 + assert response.session.state == core_pb2.SessionState.DEFINITION + assert len(response.session.nodes) == 1 + assert len(response.session.links) == 0 def test_get_sessions(self, grpc_server: CoreGrpcServer): # given @@ -224,17 +224,136 @@ class TestGrpc: # then with client.context_connect(): - sessions = client.get_sessions() + response = client.get_sessions() # then found_session = None - for current_session in sessions: + for current_session in response.sessions: if current_session.id == session.id: found_session = current_session break - assert len(sessions) == 1 + assert len(response.sessions) == 1 assert found_session is not None + def test_get_session_options(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + with client.context_connect(): + response = client.get_session_options(session.id) + + # then + assert len(response.config) > 0 + + def test_get_session_location(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + with client.context_connect(): + response = client.get_session_location(session.id) + + # then + assert response.location.scale == 1.0 + assert response.location.x == 0 + assert response.location.y == 0 + assert response.location.z == 0 + assert response.location.lat == 0 + assert response.location.lon == 0 
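+        # altitude should also default to zero for a freshly created session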
+ assert response.location.alt == 0 + + def test_set_session_location(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + scale = 2 + xyz = (1, 1, 1) + lat_lon_alt = (1, 1, 1) + with client.context_connect(): + response = client.set_session_location( + session.id, + x=xyz[0], + y=xyz[1], + z=xyz[2], + lat=lat_lon_alt[0], + lon=lat_lon_alt[1], + alt=lat_lon_alt[2], + scale=scale, + ) + + # then + assert response.result is True + assert session.location.refxyz == xyz + assert session.location.refscale == scale + assert session.location.refgeo == lat_lon_alt + + def test_set_session_options(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + option = "enablerj45" + value = "1" + with client.context_connect(): + response = client.set_session_options(session.id, {option: value}) + + # then + assert response.result is True + assert session.options.get_config(option) == value + config = session.options.get_configs() + assert len(config) > 0 + + def test_set_session_metadata(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + key = "meta1" + value = "value1" + with client.context_connect(): + response = client.set_session_metadata(session.id, {key: value}) + + # then + assert response.result is True + assert session.metadata[key] == value + + def test_get_session_metadata(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + key = "meta1" + value = "value1" + session.metadata[key] = value + + # then + with client.context_connect(): + response = client.get_session_metadata(session.id) + + # then + assert response.config[key] == value + + def test_set_session_state(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + with client.context_connect(): + response = client.set_session_state( + session.id, core_pb2.SessionState.DEFINITION + ) + + # then + assert response.result is True + assert session.state == EventTypes.DEFINITION_STATE + def test_add_node(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() @@ -242,13 +361,12 @@ class TestGrpc: # then with client.context_connect(): - position = Position(x=0, y=0) - node = Node(id=1, name="n1", type=NodeType.DEFAULT, position=position) - node_id = client.add_node(session.id, node) + node = core_pb2.Node() + response = client.add_node(session.id, node) # then - assert node_id is not None - assert session.get_node(node_id, CoreNode) is not None + assert response.node_id is not None + assert session.get_node(response.node_id, CoreNode) is not None def test_get_node(self, grpc_server: CoreGrpcServer): # given @@ -258,70 +376,27 @@ class TestGrpc: # then with client.context_connect(): - get_node, ifaces, links = client.get_node(session.id, node.id) + response = client.get_node(session.id, node.id) # then - assert node.id == get_node.id - assert len(ifaces) == 0 - assert len(links) == 0 - - def test_move_node_pos(self, grpc_server: CoreGrpcServer): - # given - client = CoreGrpcClient() - session = grpc_server.coreemu.create_session() - node = session.add_node(CoreNode) - position = Position(x=100.0, y=50.0) - - # then - with client.context_connect(): - result = client.move_node(session.id, node.id, position=position) - - # then - assert result 
is True - assert node.position.x == position.x - assert node.position.y == position.y - - def test_move_node_geo(self, grpc_server: CoreGrpcServer): - # given - client = CoreGrpcClient() - session = grpc_server.coreemu.create_session() - node = session.add_node(CoreNode) - geo = Geo(lon=0.0, lat=0.0, alt=0.0) - - # then - with client.context_connect(): - result = client.move_node(session.id, node.id, geo=geo) - - # then - assert result is True - assert node.position.lon == geo.lon - assert node.position.lat == geo.lat - assert node.position.alt == geo.alt - - def test_move_node_exception(self, grpc_server: CoreGrpcServer): - # given - client = CoreGrpcClient() - session = grpc_server.coreemu.create_session() - node = session.add_node(CoreNode) - - # then and when - with pytest.raises(CoreError), client.context_connect(): - client.move_node(session.id, node.id) + assert response.node.id == node.id def test_edit_node(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() node = session.add_node(CoreNode) - icon = "test.png" # then + x, y = 10, 10 with client.context_connect(): - result = client.edit_node(session.id, node.id, icon) + position = core_pb2.Position(x=x, y=y) + response = client.edit_node(session.id, node.id, position) # then - assert result is True - assert node.icon == icon + assert response.result is True + assert node.position.x == x + assert node.position.y == y @pytest.mark.parametrize("node_id, expected", [(1, True), (2, False)]) def test_delete_node( @@ -334,10 +409,10 @@ class TestGrpc: # then with client.context_connect(): - result = client.delete_node(session.id, node_id) + response = client.delete_node(session.id, node_id) # then - assert result is expected + assert response.result is expected if expected is True: with pytest.raises(CoreError): assert session.get_node(node.id, CoreNode) @@ -350,33 +425,69 @@ class TestGrpc: client = CoreGrpcClient() session = grpc_server.coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) - node = session.add_node(CoreNode) + options = NodeOptions(model="Host") + node = session.add_node(CoreNode, options=options) session.instantiate() - expected_output = "hello world" - expected_status = 0 + output = "hello world" # then - command = f"echo {expected_output}" + command = f"echo {output}" with client.context_connect(): - output = client.node_command(session.id, node.id, command) + response = client.node_command(session.id, node.id, command) # then - assert (expected_status, expected_output) == output + assert response.output == output def test_get_node_terminal(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() session.set_state(EventTypes.CONFIGURATION_STATE) - node = session.add_node(CoreNode) + options = NodeOptions(model="Host") + node = session.add_node(CoreNode, options=options) session.instantiate() # then with client.context_connect(): - terminal = client.get_node_terminal(session.id, node.id) + response = client.get_node_terminal(session.id, node.id) # then - assert terminal is not None + assert response.terminal is not None + + def test_get_hooks(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + file_name = "test" + file_data = "echo hello" + session.add_hook(EventTypes.RUNTIME_STATE, file_name, file_data) + + # then + with client.context_connect(): + response = client.get_hooks(session.id) + + # then + 
assert len(response.hooks) == 1 + hook = response.hooks[0] + assert hook.state == core_pb2.SessionState.RUNTIME + assert hook.file == file_name + assert hook.data == file_data + + def test_add_hook(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + file_name = "test" + file_data = "echo hello" + with client.context_connect(): + response = client.add_hook( + session.id, core_pb2.SessionState.RUNTIME, file_name, file_data + ) + + # then + assert response.result is True def test_save_xml(self, grpc_server: CoreGrpcServer, tmpdir: TemporaryFile): # given @@ -395,70 +506,102 @@ class TestGrpc: # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() - tmp = Path(tmpdir.join("text.xml")) - session.save_xml(tmp) + tmp = tmpdir.join("text.xml") + session.save_xml(str(tmp)) # then with client.context_connect(): - result, session_id = client.open_xml(tmp) + response = client.open_xml(str(tmp)) # then - assert result is True - assert session_id is not None + assert response.result is True + assert response.session_id is not None - def test_add_link(self, grpc_server: CoreGrpcServer): + def test_get_node_links(self, grpc_server: CoreGrpcServer, ip_prefixes: IpPrefixes): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() switch = session.add_node(SwitchNode) node = session.add_node(CoreNode) - assert len(session.link_manager.links()) == 0 - iface = InterfaceHelper("10.0.0.0/24").create_iface(node.id, 0) - link = Link(node.id, switch.id, iface1=iface) + iface_data = ip_prefixes.create_iface(node) + session.add_link(node.id, switch.id, iface_data) # then with client.context_connect(): - result, iface1, _ = client.add_link(session.id, link) + response = client.get_node_links(session.id, switch.id) # then - assert result is True - assert len(session.link_manager.links()) == 1 - assert iface1.id == iface.id - assert iface1.ip4 == iface.ip4 + assert len(response.links) == 1 - def test_add_link_exception(self, grpc_server: CoreGrpcServer): + def test_get_node_links_exception( + self, grpc_server: CoreGrpcServer, ip_prefixes: IpPrefixes + ): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + switch = session.add_node(SwitchNode) + node = session.add_node(CoreNode) + iface_data = ip_prefixes.create_iface(node) + session.add_link(node.id, switch.id, iface_data) + + # then + with pytest.raises(grpc.RpcError): + with client.context_connect(): + client.get_node_links(session.id, 3) + + def test_add_link(self, grpc_server: CoreGrpcServer, iface_helper: InterfaceHelper): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + switch = session.add_node(SwitchNode) + node = session.add_node(CoreNode) + assert len(switch.links()) == 0 + + # then + iface = iface_helper.create_iface(node.id, 0) + with client.context_connect(): + response = client.add_link(session.id, node.id, switch.id, iface) + + # then + assert response.result is True + assert len(switch.links()) == 1 + + def test_add_link_exception( + self, grpc_server: CoreGrpcServer, iface_helper: InterfaceHelper + ): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() node = session.add_node(CoreNode) # then - link = Link(node.id, 3) + iface = iface_helper.create_iface(node.id, 0) with pytest.raises(grpc.RpcError): with client.context_connect(): - client.add_link(session.id, link) + client.add_link(session.id, 1, 3, iface) def 
test_edit_link(self, grpc_server: CoreGrpcServer, ip_prefixes: IpPrefixes): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() - session.set_state(EventTypes.CONFIGURATION_STATE) switch = session.add_node(SwitchNode) node = session.add_node(CoreNode) - iface_data = ip_prefixes.create_iface(node) - iface, _ = session.add_link(node.id, switch.id, iface_data) - session.instantiate() - options = LinkOptions(bandwidth=30000) - assert iface.options.bandwidth != options.bandwidth - link = Link(node.id, switch.id, iface1=Interface(id=iface.id), options=options) + iface = ip_prefixes.create_iface(node) + session.add_link(node.id, switch.id, iface) + options = core_pb2.LinkOptions(bandwidth=30000) + link = switch.links()[0] + assert options.bandwidth != link.options.bandwidth # then with client.context_connect(): - result = client.edit_link(session.id, link) + response = client.edit_link( + session.id, node.id, switch.id, options, iface1_id=iface.id + ) # then - assert result is True - assert options.bandwidth == iface.options.bandwidth + assert response.result is True + link = switch.links()[0] + assert options.bandwidth == link.options.bandwidth def test_delete_link(self, grpc_server: CoreGrpcServer, ip_prefixes: IpPrefixes): # given @@ -469,21 +612,23 @@ class TestGrpc: node2 = session.add_node(CoreNode) iface2 = ip_prefixes.create_iface(node2) session.add_link(node1.id, node2.id, iface1, iface2) - assert len(session.link_manager.links()) == 1 - link = Link( - node1.id, - node2.id, - iface1=Interface(id=iface1.id), - iface2=Interface(id=iface2.id), - ) + link_node = None + for node_id in session.nodes: + node = session.nodes[node_id] + if node.id not in {node1.id, node2.id}: + link_node = node + break + assert len(link_node.links()) == 1 # then with client.context_connect(): - result = client.delete_link(session.id, link) + response = client.delete_link( + session.id, node1.id, node2.id, iface1.id, iface2.id + ) # then - assert result is True - assert len(session.link_manager.links()) == 0 + assert response.result is True + assert len(link_node.links()) == 0 def test_get_wlan_config(self, grpc_server: CoreGrpcServer): # given @@ -493,10 +638,10 @@ class TestGrpc: # then with client.context_connect(): - config = client.get_wlan_config(session.id, wlan.id) + response = client.get_wlan_config(session.id, wlan.id) # then - assert len(config) > 0 + assert len(response.config) > 0 def test_set_wlan_config(self, grpc_server: CoreGrpcServer): # given @@ -511,7 +656,7 @@ class TestGrpc: # then with client.context_connect(): - result = client.set_wlan_config( + response = client.set_wlan_config( session.id, wlan.id, { @@ -525,40 +670,91 @@ class TestGrpc: ) # then - assert result is True + assert response.result is True config = session.mobility.get_model_config(wlan.id, BasicRangeModel.name) assert config[range_key] == range_value - assert wlan.wireless_model.range == int(range_value) + assert wlan.model.range == int(range_value) + + def test_get_emane_config(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + with client.context_connect(): + response = client.get_emane_config(session.id) + + # then + assert len(response.config) > 0 + + def test_set_emane_config(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + config_key = "platform_id_start" + config_value = "2" + + # then + with client.context_connect(): + response = 
client.set_emane_config(session.id, {config_key: config_value}) + + # then + assert response.result is True + config = session.emane.get_configs() + assert len(config) > 1 + assert config[config_key] == config_value + + def test_get_emane_model_configs(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + session.set_location(47.57917, -122.13232, 2.00000, 1.0) + options = NodeOptions(emane=EmaneIeee80211abgModel.name) + emane_network = session.add_node(EmaneNet, options=options) + session.emane.set_model(emane_network, EmaneIeee80211abgModel) + config_key = "platform_id_start" + config_value = "2" + session.emane.set_model_config( + emane_network.id, EmaneIeee80211abgModel.name, {config_key: config_value} + ) + + # then + with client.context_connect(): + response = client.get_emane_model_configs(session.id) + + # then + assert len(response.configs) == 1 + model_config = response.configs[0] + assert emane_network.id == model_config.node_id + assert model_config.model == EmaneIeee80211abgModel.name + assert len(model_config.config) > 0 + assert model_config.iface_id == -1 def test_set_emane_model_config(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() session.set_location(47.57917, -122.13232, 2.00000, 1.0) - options = EmaneNet.create_options() - options.emane_model = EmaneIeee80211abgModel.name + options = NodeOptions(emane=EmaneIeee80211abgModel.name) emane_network = session.add_node(EmaneNet, options=options) - session.emane.node_models[emane_network.id] = EmaneIeee80211abgModel.name + session.emane.set_model(emane_network, EmaneIeee80211abgModel) config_key = "bandwidth" config_value = "900000" - option = ConfigOption( - label=config_key, - name=config_key, - value=config_value, - type=ConfigOptionType.INT32, - group="Default", - ) - config = EmaneModelConfig( - emane_network.id, EmaneIeee80211abgModel.name, config={config_key: option} - ) # then with client.context_connect(): - result = client.set_emane_model_config(session.id, config) + response = client.set_emane_model_config( + session.id, + emane_network.id, + EmaneIeee80211abgModel.name, + {config_key: config_value}, + ) # then - assert result is True - config = session.emane.get_config(emane_network.id, EmaneIeee80211abgModel.name) + assert response.result is True + config = session.emane.get_model_config( + emane_network.id, EmaneIeee80211abgModel.name + ) assert config[config_key] == config_value def test_get_emane_model_config(self, grpc_server: CoreGrpcServer): @@ -566,19 +762,47 @@ class TestGrpc: client = CoreGrpcClient() session = grpc_server.coreemu.create_session() session.set_location(47.57917, -122.13232, 2.00000, 1.0) - options = EmaneNet.create_options() - options.emane_model = EmaneIeee80211abgModel.name + options = NodeOptions(emane=EmaneIeee80211abgModel.name) emane_network = session.add_node(EmaneNet, options=options) - session.emane.node_models[emane_network.id] = EmaneIeee80211abgModel.name + session.emane.set_model(emane_network, EmaneIeee80211abgModel) # then with client.context_connect(): - config = client.get_emane_model_config( + response = client.get_emane_model_config( session.id, emane_network.id, EmaneIeee80211abgModel.name ) # then - assert len(config) > 0 + assert len(response.config) > 0 + + def test_get_emane_models(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + + # then + with 
client.context_connect(): + response = client.get_emane_models(session.id) + + # then + assert len(response.models) > 0 + + def test_get_mobility_configs(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + wlan = session.add_node(WlanNode) + session.mobility.set_model_config(wlan.id, Ns2ScriptedMobility.name, {}) + + # then + with client.context_connect(): + response = client.get_mobility_configs(session.id) + + # then + assert len(response.configs) > 0 + assert wlan.id in response.configs + mapped_config = response.configs[wlan.id] + assert len(mapped_config.config) > 0 def test_get_mobility_config(self, grpc_server: CoreGrpcServer): # given @@ -589,10 +813,10 @@ class TestGrpc: # then with client.context_connect(): - config = client.get_mobility_config(session.id, wlan.id) + response = client.get_mobility_config(session.id, wlan.id) # then - assert len(config) > 0 + assert len(response.config) > 0 def test_set_mobility_config(self, grpc_server: CoreGrpcServer): # given @@ -604,12 +828,12 @@ class TestGrpc: # then with client.context_connect(): - result = client.set_mobility_config( + response = client.set_mobility_config( session.id, wlan.id, {config_key: config_value} ) # then - assert result is True + assert response.result is True config = session.mobility.get_model_config(wlan.id, Ns2ScriptedMobility.name) assert config[config_key] == config_value @@ -623,10 +847,21 @@ class TestGrpc: # then with client.context_connect(): - result = client.mobility_action(session.id, wlan.id, MobilityAction.STOP) + response = client.mobility_action(session.id, wlan.id, MobilityAction.STOP) # then - assert result is True + assert response.result is True + + def test_get_services(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + + # then + with client.context_connect(): + response = client.get_services() + + # then + assert len(response.services) > 0 def test_get_service_defaults(self, grpc_server: CoreGrpcServer): # given @@ -635,25 +870,43 @@ class TestGrpc: # then with client.context_connect(): - defaults = client.get_service_defaults(session.id) + response = client.get_service_defaults(session.id) # then - assert len(defaults) > 0 + assert len(response.defaults) > 0 def test_set_service_defaults(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() - model = "test" + node_type = "test" services = ["SSH"] # then with client.context_connect(): - result = client.set_service_defaults(session.id, {model: services}) + response = client.set_service_defaults(session.id, {node_type: services}) # then - assert result is True - assert session.services.default_services[model] == services + assert response.result is True + assert session.services.default_services[node_type] == services + + def test_get_node_service_configs(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + node = session.add_node(CoreNode) + service_name = "DefaultRoute" + session.services.set_service(node.id, service_name) + + # then + with client.context_connect(): + response = client.get_node_service_configs(session.id) + + # then + assert len(response.configs) == 1 + service_config = response.configs[0] + assert service_config.node_id == node.id + assert service_config.service == service_name def test_get_node_service(self, grpc_server: CoreGrpcServer): # given @@ -663,10 +916,10 @@ class TestGrpc: # then 
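+        # requesting the DefaultRoute service for the node should return its config entries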
with client.context_connect(): - service = client.get_node_service(session.id, node.id, "DefaultRoute") + response = client.get_node_service(session.id, node.id, "DefaultRoute") # then - assert len(service.configs) > 0 + assert len(response.service.configs) > 0 def test_get_node_service_file(self, grpc_server: CoreGrpcServer): # given @@ -676,32 +929,55 @@ class TestGrpc: # then with client.context_connect(): - data = client.get_node_service_file( + response = client.get_node_service_file( session.id, node.id, "DefaultRoute", "defaultroute.sh" ) # then - assert data is not None + assert response.data is not None - def test_service_action(self, grpc_server: CoreGrpcServer): + def test_set_node_service(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() - options = CoreNode.create_options() - options.legacy = True - node = session.add_node(CoreNode, options=options) + node = session.add_node(CoreNode) service_name = "DefaultRoute" + validate = ["echo hello"] # then with client.context_connect(): - result = client.service_action( - session.id, node.id, service_name, ServiceAction.STOP + response = client.set_node_service( + session.id, node.id, service_name, validate=validate ) # then - assert result is True + assert response.result is True + service = session.services.get_service( + node.id, service_name, default_service=True + ) + assert service.validate == tuple(validate) - def test_config_service_action(self, grpc_server: CoreGrpcServer): + def test_set_node_service_file(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + node = session.add_node(CoreNode) + service_name = "DefaultRoute" + file_name = "defaultroute.sh" + file_data = "echo hello" + + # then + with client.context_connect(): + response = client.set_node_service_file( + session.id, node.id, service_name, file_name, file_data + ) + + # then + assert response.result is True + service_file = session.services.get_service_file(node, service_name, file_name) + assert service_file.data == file_data + + def test_service_action(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() session = grpc_server.coreemu.create_session() @@ -710,12 +986,12 @@ class TestGrpc: # then with client.context_connect(): - result = client.config_service_action( + response = client.service_action( session.id, node.id, service_name, ServiceAction.STOP ) # then - assert result is True + assert response.result is True def test_node_events(self, grpc_server: CoreGrpcServer): # given @@ -727,14 +1003,14 @@ class TestGrpc: node.position.alt = 5.0 queue = Queue() - def handle_event(event: Event) -> None: - assert event.session_id == session.id - assert event.node_event is not None - event_node = event.node_event.node + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("node_event") + event_node = event_data.node_event.node assert event_node.geo.lat == node.position.lat assert event_node.geo.lon == node.position.lon assert event_node.geo.alt == node.position.alt - queue.put(event) + queue.put(event_data) # then with client.context_connect(): @@ -751,17 +1027,15 @@ class TestGrpc: session = grpc_server.coreemu.create_session() wlan = session.add_node(WlanNode) node = session.add_node(CoreNode) - iface_data = ip_prefixes.create_iface(node) - session.add_link(node.id, wlan.id, iface_data) - core_link = list(session.link_manager.links())[0] - link_data = 
core_link.get_data(MessageFlags.ADD) - + iface = ip_prefixes.create_iface(node) + session.add_link(node.id, wlan.id, iface) + link_data = wlan.links()[0] queue = Queue() - def handle_event(event: Event) -> None: - assert event.session_id == session.id - assert event.link_event is not None - queue.put(event) + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("link_event") + queue.put(event_data) # then with client.context_connect(): @@ -799,19 +1073,43 @@ class TestGrpc: session = grpc_server.coreemu.create_session() queue = Queue() - def handle_event(event: Event) -> None: - assert event.session_id == session.id - assert event.session_event is not None - queue.put(event) + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("session_event") + queue.put(event_data) # then with client.context_connect(): client.events(session.id, handle_event) time.sleep(0.1) - event_data = EventData( + event = EventData( event_type=EventTypes.RUNTIME_STATE, time=str(time.monotonic()) ) - session.broadcast_event(event_data) + session.broadcast_event(event) + + # then + queue.get(timeout=5) + + def test_config_events(self, grpc_server: CoreGrpcServer): + # given + client = CoreGrpcClient() + session = grpc_server.coreemu.create_session() + queue = Queue() + + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("config_event") + queue.put(event_data) + + # then + with client.context_connect(): + client.events(session.id, handle_event) + time.sleep(0.1) + session_config = session.options.get_configs() + config_data = ConfigShim.config_data( + 0, None, ConfigFlags.UPDATE.value, session.options, session_config + ) + session.broadcast_config(config_data) # then queue.get(timeout=5) @@ -826,15 +1124,15 @@ class TestGrpc: node_id = None text = "exception message" - def handle_event(event: Event) -> None: - assert event.session_id == session.id - assert event.exception_event is not None - exception_event = event.exception_event - assert exception_event.level.value == exception_level.value + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("exception_event") + exception_event = event_data.exception_event + assert exception_event.level == exception_level.value assert exception_event.node_id == 0 assert exception_event.source == source assert exception_event.text == text - queue.put(event) + queue.put(event_data) # then with client.context_connect(): @@ -852,10 +1150,10 @@ class TestGrpc: node = session.add_node(CoreNode) queue = Queue() - def handle_event(event: Event) -> None: - assert event.session_id == session.id - assert event.file_event is not None - queue.put(event) + def handle_event(event_data): + assert event_data.session_id == session.id + assert event_data.HasField("file_event") + queue.put(event_data) # then with client.context_connect(): @@ -875,13 +1173,17 @@ class TestGrpc: session = grpc_server.coreemu.create_session() node = session.add_node(CoreNode) x, y = 10.0, 15.0 - streamer = MoveNodesStreamer(session.id) - streamer.send_position(node.id, x, y) - streamer.stop() + + def move_iter(): + yield core_pb2.MoveNodesRequest( + session_id=session.id, + node_id=node.id, + position=core_pb2.Position(x=x, y=y), + ) # then with client.context_connect(): - client.move_nodes(streamer) + client.move_nodes(move_iter()) # assert assert node.position.x == x @@ -893,9 +1195,6 @@ class TestGrpc: session = 
grpc_server.coreemu.create_session() node = session.add_node(CoreNode) lon, lat, alt = 10.0, 15.0, 5.0 - streamer = MoveNodesStreamer(session.id) - streamer.send_geo(node.id, lon, lat, alt) - streamer.stop() queue = Queue() def node_handler(node_data: NodeData): @@ -907,50 +1206,32 @@ class TestGrpc: session.node_handlers.append(node_handler) + def move_iter(): + yield core_pb2.MoveNodesRequest( + session_id=session.id, + node_id=node.id, + geo=core_pb2.Geo(lon=lon, lat=lat, alt=alt), + ) + # then with client.context_connect(): - client.move_nodes(streamer) + client.move_nodes(move_iter()) # assert - assert queue.get(timeout=5) assert node.position.lon == lon assert node.position.lat == lat assert node.position.alt == alt + assert queue.get(timeout=5) def test_move_nodes_exception(self, grpc_server: CoreGrpcServer): # given client = CoreGrpcClient() - session = grpc_server.coreemu.create_session() - streamer = MoveNodesStreamer(session.id) - request = MoveNodesRequest(session.id + 1, 1) - streamer.send(request) - streamer.stop() + grpc_server.coreemu.create_session() + + def move_iter(): + yield core_pb2.MoveNodesRequest() # then with pytest.raises(grpc.RpcError): with client.context_connect(): - client.move_nodes(streamer) - - def test_wlan_link(self, grpc_server: CoreGrpcServer, ip_prefixes: IpPrefixes): - # given - client = CoreGrpcClient() - session = grpc_server.coreemu.create_session() - session.set_state(EventTypes.CONFIGURATION_STATE) - wlan = session.add_node(WlanNode) - node1 = session.add_node(CoreNode) - node2 = session.add_node(CoreNode) - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - session.add_link(node1.id, wlan.id, iface1_data) - session.add_link(node2.id, wlan.id, iface2_data) - session.instantiate() - assert len(session.link_manager.links()) == 2 - - # when - with client.context_connect(): - result1 = client.wlan_link(session.id, wlan.id, node1.id, node2.id, True) - result2 = client.wlan_link(session.id, wlan.id, node1.id, node2.id, False) - - # then - assert result1 is True - assert result2 is True + client.move_nodes(move_iter()) diff --git a/daemon/tests/test_gui.py b/daemon/tests/test_gui.py new file mode 100644 index 00000000..a0b3bd8a --- /dev/null +++ b/daemon/tests/test_gui.py @@ -0,0 +1,976 @@ +""" +Tests for testing tlv message handling. 
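+Covers node, link, session, and hook messages dispatched through the TLV CoreHandler.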
+""" +import os +import time +from typing import Optional + +import mock +import netaddr +import pytest +from mock import MagicMock + +from core.api.tlv import coreapi +from core.api.tlv.corehandlers import CoreHandler +from core.api.tlv.enumerations import ( + ConfigFlags, + ConfigTlvs, + EventTlvs, + ExecuteTlvs, + FileTlvs, + LinkTlvs, + NodeTlvs, + SessionTlvs, +) +from core.emane.ieee80211abg import EmaneIeee80211abgModel +from core.emulator.enumerations import EventTypes, MessageFlags, NodeTypes, RegisterTlvs +from core.errors import CoreError +from core.location.mobility import BasicRangeModel +from core.nodes.base import CoreNode, NodeBase +from core.nodes.network import SwitchNode, WlanNode + + +def dict_to_str(values) -> str: + return "|".join(f"{x}={values[x]}" for x in values) + + +class TestGui: + @pytest.mark.parametrize( + "node_type, model", + [ + (NodeTypes.DEFAULT, "PC"), + (NodeTypes.EMANE, None), + (NodeTypes.HUB, None), + (NodeTypes.SWITCH, None), + (NodeTypes.WIRELESS_LAN, None), + (NodeTypes.TUNNEL, None), + ], + ) + def test_node_add( + self, coretlv: CoreHandler, node_type: NodeTypes, model: Optional[str] + ): + node_id = 1 + name = "node1" + message = coreapi.CoreNodeMessage.create( + MessageFlags.ADD.value, + [ + (NodeTlvs.NUMBER, node_id), + (NodeTlvs.TYPE, node_type.value), + (NodeTlvs.NAME, name), + (NodeTlvs.X_POSITION, 0), + (NodeTlvs.Y_POSITION, 0), + (NodeTlvs.MODEL, model), + ], + ) + + coretlv.handle_message(message) + node = coretlv.session.get_node(node_id, NodeBase) + assert node + assert node.name == name + + def test_node_update(self, coretlv: CoreHandler): + node_id = 1 + coretlv.session.add_node(CoreNode, _id=node_id) + x = 50 + y = 100 + message = coreapi.CoreNodeMessage.create( + 0, + [ + (NodeTlvs.NUMBER, node_id), + (NodeTlvs.X_POSITION, x), + (NodeTlvs.Y_POSITION, y), + ], + ) + + coretlv.handle_message(message) + + node = coretlv.session.get_node(node_id, NodeBase) + assert node is not None + assert node.position.x == x + assert node.position.y == y + + def test_node_delete(self, coretlv: CoreHandler): + node_id = 1 + coretlv.session.add_node(CoreNode, _id=node_id) + message = coreapi.CoreNodeMessage.create( + MessageFlags.DELETE.value, [(NodeTlvs.NUMBER, node_id)] + ) + + coretlv.handle_message(message) + + with pytest.raises(CoreError): + coretlv.session.get_node(node_id, NodeBase) + + def test_link_add_node_to_net(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + switch_id = 2 + coretlv.session.add_node(SwitchNode, _id=switch_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + ], + ) + + coretlv.handle_message(message) + + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + + def test_link_add_net_to_node(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + switch_id = 2 + coretlv.session.add_node(SwitchNode, _id=switch_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface2_ip4 = str(ip_prefix[node1_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, switch_id), + (LinkTlvs.N2_NUMBER, node1_id), + (LinkTlvs.IFACE2_NUMBER, 0), + 
(LinkTlvs.IFACE2_IP4, iface2_ip4), + (LinkTlvs.IFACE2_IP4_MASK, 24), + ], + ) + + coretlv.handle_message(message) + + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + + def test_link_add_node_to_node(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + node2_id = 2 + coretlv.session.add_node(CoreNode, _id=node2_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + iface2_ip4 = str(ip_prefix[node2_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, node2_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + (LinkTlvs.IFACE2_NUMBER, 0), + (LinkTlvs.IFACE2_IP4, iface2_ip4), + (LinkTlvs.IFACE2_IP4_MASK, 24), + ], + ) + + coretlv.handle_message(message) + + all_links = [] + for node_id in coretlv.session.nodes: + node = coretlv.session.nodes[node_id] + all_links += node.links() + assert len(all_links) == 1 + + def test_link_update(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + switch_id = 2 + coretlv.session.add_node(SwitchNode, _id=switch_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + ], + ) + coretlv.handle_message(message) + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + link = all_links[0] + assert link.options.bandwidth is None + + bandwidth = 50000 + message = coreapi.CoreLinkMessage.create( + 0, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.BANDWIDTH, bandwidth), + ], + ) + coretlv.handle_message(message) + + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + link = all_links[0] + assert link.options.bandwidth == bandwidth + + def test_link_delete_node_to_node(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + node2_id = 2 + coretlv.session.add_node(CoreNode, _id=node2_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + iface2_ip4 = str(ip_prefix[node2_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, node2_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + (LinkTlvs.IFACE2_IP4, iface2_ip4), + (LinkTlvs.IFACE2_IP4_MASK, 24), + ], + ) + coretlv.handle_message(message) + all_links = [] + for node_id in coretlv.session.nodes: + node = coretlv.session.nodes[node_id] + all_links += node.links() + assert len(all_links) == 1 + + message = coreapi.CoreLinkMessage.create( + MessageFlags.DELETE.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, node2_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE2_NUMBER, 0), + ], + ) + coretlv.handle_message(message) + + all_links = [] + for node_id in coretlv.session.nodes: + node = coretlv.session.nodes[node_id] + all_links += node.links() + assert 
len(all_links) == 0 + + def test_link_delete_node_to_net(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + switch_id = 2 + coretlv.session.add_node(SwitchNode, _id=switch_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + ], + ) + coretlv.handle_message(message) + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + + message = coreapi.CoreLinkMessage.create( + MessageFlags.DELETE.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + ], + ) + coretlv.handle_message(message) + + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 0 + + def test_link_delete_net_to_node(self, coretlv: CoreHandler): + node1_id = 1 + coretlv.session.add_node(CoreNode, _id=node1_id) + switch_id = 2 + coretlv.session.add_node(SwitchNode, _id=switch_id) + ip_prefix = netaddr.IPNetwork("10.0.0.0/24") + iface1_ip4 = str(ip_prefix[node1_id]) + message = coreapi.CoreLinkMessage.create( + MessageFlags.ADD.value, + [ + (LinkTlvs.N1_NUMBER, node1_id), + (LinkTlvs.N2_NUMBER, switch_id), + (LinkTlvs.IFACE1_NUMBER, 0), + (LinkTlvs.IFACE1_IP4, iface1_ip4), + (LinkTlvs.IFACE1_IP4_MASK, 24), + ], + ) + coretlv.handle_message(message) + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 1 + + message = coreapi.CoreLinkMessage.create( + MessageFlags.DELETE.value, + [ + (LinkTlvs.N1_NUMBER, switch_id), + (LinkTlvs.N2_NUMBER, node1_id), + (LinkTlvs.IFACE2_NUMBER, 0), + ], + ) + coretlv.handle_message(message) + + switch_node = coretlv.session.get_node(switch_id, SwitchNode) + all_links = switch_node.links() + assert len(all_links) == 0 + + def test_session_update(self, coretlv: CoreHandler): + session_id = coretlv.session.id + name = "test" + message = coreapi.CoreSessionMessage.create( + 0, [(SessionTlvs.NUMBER, str(session_id)), (SessionTlvs.NAME, name)] + ) + + coretlv.handle_message(message) + + assert coretlv.session.name == name + + def test_session_query(self, coretlv: CoreHandler): + coretlv.dispatch_replies = mock.MagicMock() + message = coreapi.CoreSessionMessage.create(MessageFlags.STRING.value, []) + + coretlv.handle_message(message) + + args, _ = coretlv.dispatch_replies.call_args + replies = args[0] + assert len(replies) == 1 + + def test_session_join(self, coretlv: CoreHandler): + coretlv.dispatch_replies = mock.MagicMock() + session_id = coretlv.session.id + message = coreapi.CoreSessionMessage.create( + MessageFlags.ADD.value, [(SessionTlvs.NUMBER, str(session_id))] + ) + + coretlv.handle_message(message) + + assert coretlv.session.id == session_id + + def test_session_delete(self, coretlv: CoreHandler): + assert len(coretlv.coreemu.sessions) == 1 + session_id = coretlv.session.id + message = coreapi.CoreSessionMessage.create( + MessageFlags.DELETE.value, [(SessionTlvs.NUMBER, str(session_id))] + ) + + coretlv.handle_message(message) + + assert len(coretlv.coreemu.sessions) == 0 + + def test_file_hook_add(self, coretlv: CoreHandler): + state = EventTypes.DATACOLLECT_STATE + assert coretlv.session.hooks.get(state) is 
None + file_name = "test.sh" + file_data = "echo hello" + message = coreapi.CoreFileMessage.create( + MessageFlags.ADD.value, + [ + (FileTlvs.TYPE, f"hook:{state.value}"), + (FileTlvs.NAME, file_name), + (FileTlvs.DATA, file_data), + ], + ) + + coretlv.handle_message(message) + + hooks = coretlv.session.hooks.get(state) + assert len(hooks) == 1 + name, data = hooks[0] + assert file_name == name + assert file_data == data + + def test_file_service_file_set(self, coretlv: CoreHandler): + node = coretlv.session.add_node(CoreNode) + service = "DefaultRoute" + file_name = "defaultroute.sh" + file_data = "echo hello" + message = coreapi.CoreFileMessage.create( + MessageFlags.ADD.value, + [ + (FileTlvs.NODE, node.id), + (FileTlvs.TYPE, f"service:{service}"), + (FileTlvs.NAME, file_name), + (FileTlvs.DATA, file_data), + ], + ) + + coretlv.handle_message(message) + + service_file = coretlv.session.services.get_service_file( + node, service, file_name + ) + assert file_data == service_file.data + + def test_file_node_file_copy(self, request, coretlv: CoreHandler): + file_name = "/var/log/test/node.log" + node = coretlv.session.add_node(CoreNode) + node.makenodedir() + file_data = "echo hello" + message = coreapi.CoreFileMessage.create( + MessageFlags.ADD.value, + [ + (FileTlvs.NODE, node.id), + (FileTlvs.NAME, file_name), + (FileTlvs.DATA, file_data), + ], + ) + + coretlv.handle_message(message) + + if not request.config.getoption("mock"): + directory, basename = os.path.split(file_name) + created_directory = directory[1:].replace("/", ".") + create_path = os.path.join(node.nodedir, created_directory, basename) + assert os.path.exists(create_path) + + def test_exec_node_tty(self, coretlv: CoreHandler): + coretlv.dispatch_replies = mock.MagicMock() + node = coretlv.session.add_node(CoreNode) + message = coreapi.CoreExecMessage.create( + MessageFlags.TTY.value, + [ + (ExecuteTlvs.NODE, node.id), + (ExecuteTlvs.NUMBER, 1), + (ExecuteTlvs.COMMAND, "bash"), + ], + ) + + coretlv.handle_message(message) + + args, _ = coretlv.dispatch_replies.call_args + replies = args[0] + assert len(replies) == 1 + + def test_exec_local_command(self, request, coretlv: CoreHandler): + if request.config.getoption("mock"): + pytest.skip("mocking calls") + + coretlv.dispatch_replies = mock.MagicMock() + node = coretlv.session.add_node(CoreNode) + cmd = "echo hello" + message = coreapi.CoreExecMessage.create( + MessageFlags.TEXT.value | MessageFlags.LOCAL.value, + [ + (ExecuteTlvs.NODE, node.id), + (ExecuteTlvs.NUMBER, 1), + (ExecuteTlvs.COMMAND, cmd), + ], + ) + + coretlv.handle_message(message) + + args, _ = coretlv.dispatch_replies.call_args + replies = args[0] + assert len(replies) == 1 + + def test_exec_node_command(self, coretlv: CoreHandler): + coretlv.dispatch_replies = mock.MagicMock() + node = coretlv.session.add_node(CoreNode) + cmd = "echo hello" + message = coreapi.CoreExecMessage.create( + MessageFlags.TEXT.value, + [ + (ExecuteTlvs.NODE, node.id), + (ExecuteTlvs.NUMBER, 1), + (ExecuteTlvs.COMMAND, cmd), + ], + ) + node.cmd = MagicMock(return_value="hello") + + coretlv.handle_message(message) + + node.cmd.assert_called_with(cmd) + + @pytest.mark.parametrize( + "state", + [ + EventTypes.SHUTDOWN_STATE, + EventTypes.RUNTIME_STATE, + EventTypes.DATACOLLECT_STATE, + EventTypes.CONFIGURATION_STATE, + EventTypes.DEFINITION_STATE, + ], + ) + def test_event_state(self, coretlv: CoreHandler, state: EventTypes): + message = coreapi.CoreEventMessage.create(0, [(EventTlvs.TYPE, state.value)]) + + 
coretlv.handle_message(message) + + assert coretlv.session.state == state + + def test_event_schedule(self, coretlv: CoreHandler): + coretlv.session.add_event = mock.MagicMock() + node = coretlv.session.add_node(CoreNode) + message = coreapi.CoreEventMessage.create( + MessageFlags.ADD.value, + [ + (EventTlvs.TYPE, EventTypes.SCHEDULED.value), + (EventTlvs.TIME, str(time.monotonic() + 100)), + (EventTlvs.NODE, node.id), + (EventTlvs.NAME, "event"), + (EventTlvs.DATA, "data"), + ], + ) + + coretlv.handle_message(message) + + coretlv.session.add_event.assert_called_once() + + def test_event_save_xml(self, coretlv: CoreHandler, tmpdir): + xml_file = tmpdir.join("coretlv.session.xml") + file_path = xml_file.strpath + coretlv.session.add_node(CoreNode) + message = coreapi.CoreEventMessage.create( + 0, + [(EventTlvs.TYPE, EventTypes.FILE_SAVE.value), (EventTlvs.NAME, file_path)], + ) + + coretlv.handle_message(message) + + assert os.path.exists(file_path) + + def test_event_open_xml(self, coretlv: CoreHandler, tmpdir): + xml_file = tmpdir.join("coretlv.session.xml") + file_path = xml_file.strpath + node = coretlv.session.add_node(CoreNode) + coretlv.session.save_xml(file_path) + coretlv.session.delete_node(node.id) + message = coreapi.CoreEventMessage.create( + 0, + [(EventTlvs.TYPE, EventTypes.FILE_OPEN.value), (EventTlvs.NAME, file_path)], + ) + + coretlv.handle_message(message) + assert coretlv.session.get_node(node.id, NodeBase) + + @pytest.mark.parametrize( + "state", + [ + EventTypes.START, + EventTypes.STOP, + EventTypes.RESTART, + EventTypes.PAUSE, + EventTypes.RECONFIGURE, + ], + ) + def test_event_service(self, coretlv: CoreHandler, state: EventTypes): + coretlv.session.broadcast_event = mock.MagicMock() + node = coretlv.session.add_node(CoreNode) + message = coreapi.CoreEventMessage.create( + 0, + [ + (EventTlvs.TYPE, state.value), + (EventTlvs.NODE, node.id), + (EventTlvs.NAME, "service:DefaultRoute"), + ], + ) + + coretlv.handle_message(message) + + coretlv.session.broadcast_event.assert_called_once() + + @pytest.mark.parametrize( + "state", + [ + EventTypes.START, + EventTypes.STOP, + EventTypes.RESTART, + EventTypes.PAUSE, + EventTypes.RECONFIGURE, + ], + ) + def test_event_mobility(self, coretlv: CoreHandler, state: EventTypes): + message = coreapi.CoreEventMessage.create( + 0, [(EventTlvs.TYPE, state.value), (EventTlvs.NAME, "mobility:ns2script")] + ) + + coretlv.handle_message(message) + + def test_register_gui(self, coretlv: CoreHandler): + message = coreapi.CoreRegMessage.create(0, [(RegisterTlvs.GUI, "gui")]) + coretlv.handle_message(message) + + def test_register_xml(self, coretlv: CoreHandler, tmpdir): + xml_file = tmpdir.join("coretlv.session.xml") + file_path = xml_file.strpath + node = coretlv.session.add_node(CoreNode) + coretlv.session.save_xml(file_path) + coretlv.session.delete_node(node.id) + message = coreapi.CoreRegMessage.create( + 0, [(RegisterTlvs.EXECUTE_SERVER, file_path)] + ) + coretlv.session.instantiate() + + coretlv.handle_message(message) + + assert coretlv.coreemu.sessions[1].get_node(node.id, CoreNode) + + def test_register_python(self, coretlv: CoreHandler, tmpdir): + xml_file = tmpdir.join("test.py") + file_path = xml_file.strpath + with open(file_path, "w") as f: + f.write("from core.nodes.base import CoreNode\n") + f.write("coreemu = globals()['coreemu']\n") + f.write(f"session = coreemu.sessions[{coretlv.session.id}]\n") + f.write("session.add_node(CoreNode)\n") + message = coreapi.CoreRegMessage.create( + 0, [(RegisterTlvs.EXECUTE_SERVER, 
file_path)] + ) + coretlv.session.instantiate() + + coretlv.handle_message(message) + + assert len(coretlv.session.nodes) == 1 + + def test_config_all(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + MessageFlags.ADD.value, + [(ConfigTlvs.OBJECT, "all"), (ConfigTlvs.TYPE, ConfigFlags.RESET.value)], + ) + coretlv.session.location.refxyz = (10, 10, 10) + + coretlv.handle_message(message) + + assert coretlv.session.location.refxyz == (0, 0, 0) + + def test_config_options_request(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "session"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_options_update(self, coretlv: CoreHandler): + test_key = "test" + test_value = "test" + values = {test_key: test_value} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "session"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + + coretlv.handle_message(message) + + assert coretlv.session.options.get_config(test_key) == test_value + + def test_config_location_reset(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "location"), + (ConfigTlvs.TYPE, ConfigFlags.RESET.value), + ], + ) + coretlv.session.location.refxyz = (10, 10, 10) + + coretlv.handle_message(message) + + assert coretlv.session.location.refxyz == (0, 0, 0) + + def test_config_location_update(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "location"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, "10|10|70|50|0|0.5"), + ], + ) + + coretlv.handle_message(message) + + assert coretlv.session.location.refxyz == (10, 10, 0.0) + assert coretlv.session.location.refgeo == (70, 50, 0) + assert coretlv.session.location.refscale == 0.5 + + def test_config_metadata_request(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "metadata"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_metadata_update(self, coretlv: CoreHandler): + test_key = "test" + test_value = "test" + values = {test_key: test_value} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "metadata"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + + coretlv.handle_message(message) + + assert coretlv.session.metadata[test_key] == test_value + + def test_config_broker_request(self, coretlv: CoreHandler): + server = "test" + host = "10.0.0.1" + port = 50000 + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "broker"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, f"{server}:{host}:{port}"), + ], + ) + coretlv.session.distributed.add_server = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.session.distributed.add_server.assert_called_once_with(server, host) + + def test_config_services_request_all(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "services"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + 
coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_services_request_specific(self, coretlv: CoreHandler): + node = coretlv.session.add_node(CoreNode) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, node.id), + (ConfigTlvs.OBJECT, "services"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + (ConfigTlvs.OPAQUE, "service:DefaultRoute"), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_services_request_specific_file(self, coretlv: CoreHandler): + node = coretlv.session.add_node(CoreNode) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, node.id), + (ConfigTlvs.OBJECT, "services"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + (ConfigTlvs.OPAQUE, "service:DefaultRoute:defaultroute.sh"), + ], + ) + coretlv.session.broadcast_file = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.session.broadcast_file.assert_called_once() + + def test_config_services_reset(self, coretlv: CoreHandler): + node = coretlv.session.add_node(CoreNode) + service = "DefaultRoute" + coretlv.session.services.set_service(node.id, service) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "services"), + (ConfigTlvs.TYPE, ConfigFlags.RESET.value), + ], + ) + assert coretlv.session.services.get_service(node.id, service) is not None + + coretlv.handle_message(message) + + assert coretlv.session.services.get_service(node.id, service) is None + + def test_config_services_set(self, coretlv: CoreHandler): + node = coretlv.session.add_node(CoreNode) + service = "DefaultRoute" + values = {"meta": "metadata"} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, node.id), + (ConfigTlvs.OBJECT, "services"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.OPAQUE, f"service:{service}"), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + assert coretlv.session.services.get_service(node.id, service) is None + + coretlv.handle_message(message) + + assert coretlv.session.services.get_service(node.id, service) is not None + + def test_config_mobility_reset(self, coretlv: CoreHandler): + wlan = coretlv.session.add_node(WlanNode) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "MobilityManager"), + (ConfigTlvs.TYPE, ConfigFlags.RESET.value), + ], + ) + coretlv.session.mobility.set_model_config(wlan.id, BasicRangeModel.name, {}) + assert len(coretlv.session.mobility.node_configurations) == 1 + + coretlv.handle_message(message) + + assert len(coretlv.session.mobility.node_configurations) == 0 + + def test_config_mobility_model_request(self, coretlv: CoreHandler): + wlan = coretlv.session.add_node(WlanNode) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, wlan.id), + (ConfigTlvs.OBJECT, BasicRangeModel.name), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_mobility_model_update(self, coretlv: CoreHandler): + wlan = coretlv.session.add_node(WlanNode) + config_key = "range" + config_value = "1000" + values = {config_key: config_value} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, wlan.id), + (ConfigTlvs.OBJECT, BasicRangeModel.name), + 
(ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + + coretlv.handle_message(message) + + config = coretlv.session.mobility.get_model_config( + wlan.id, BasicRangeModel.name + ) + assert config[config_key] == config_value + + def test_config_emane_model_request(self, coretlv: CoreHandler): + wlan = coretlv.session.add_node(WlanNode) + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, wlan.id), + (ConfigTlvs.OBJECT, EmaneIeee80211abgModel.name), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_emane_model_update(self, coretlv: CoreHandler): + wlan = coretlv.session.add_node(WlanNode) + config_key = "distance" + config_value = "50051" + values = {config_key: config_value} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.NODE, wlan.id), + (ConfigTlvs.OBJECT, EmaneIeee80211abgModel.name), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + + coretlv.handle_message(message) + + config = coretlv.session.emane.get_model_config( + wlan.id, EmaneIeee80211abgModel.name + ) + assert config[config_key] == config_value + + def test_config_emane_request(self, coretlv: CoreHandler): + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "emane"), + (ConfigTlvs.TYPE, ConfigFlags.REQUEST.value), + ], + ) + coretlv.handle_broadcast_config = mock.MagicMock() + + coretlv.handle_message(message) + + coretlv.handle_broadcast_config.assert_called_once() + + def test_config_emane_update(self, coretlv: CoreHandler): + config_key = "eventservicedevice" + config_value = "eth4" + values = {config_key: config_value} + message = coreapi.CoreConfMessage.create( + 0, + [ + (ConfigTlvs.OBJECT, "emane"), + (ConfigTlvs.TYPE, ConfigFlags.UPDATE.value), + (ConfigTlvs.VALUES, dict_to_str(values)), + ], + ) + + coretlv.handle_message(message) + + config = coretlv.session.emane.get_configs() + assert config[config_key] == config_value diff --git a/daemon/tests/test_links.py b/daemon/tests/test_links.py index eea88fb3..94c8c699 100644 --- a/daemon/tests/test_links.py +++ b/daemon/tests/test_links.py @@ -1,18 +1,10 @@ from typing import Tuple -import pytest - from core.emulator.data import IpPrefixes, LinkOptions from core.emulator.session import Session -from core.errors import CoreError from core.nodes.base import CoreNode from core.nodes.network import SwitchNode -INVALID_ID: int = 100 -LINK_OPTIONS: LinkOptions = LinkOptions( - delay=50, bandwidth=5000000, loss=25, dup=25, jitter=10, buffer=100 -) - def create_ptp_network( session: Session, ip_prefixes: IpPrefixes @@ -33,7 +25,7 @@ def create_ptp_network( class TestLinks: - def test_add_node_to_node(self, session: Session, ip_prefixes: IpPrefixes): + def test_add_ptp(self, session: Session, ip_prefixes: IpPrefixes): # given node1 = session.add_node(CoreNode) node2 = session.add_node(CoreNode) @@ -41,22 +33,11 @@ class TestLinks: iface2_data = ip_prefixes.create_iface(node2) # when - iface1, iface2 = session.add_link( - node1.id, node2.id, iface1_data, iface2_data, options=LINK_OPTIONS - ) + session.add_link(node1.id, node2.id, iface1_data, iface2_data) # then - assert len(session.link_manager.links()) == 1 assert node1.get_iface(iface1_data.id) assert node2.get_iface(iface2_data.id) - assert iface1 is not None - assert iface1.options == LINK_OPTIONS - 
assert iface1.has_netem - assert node1.get_iface(iface1_data.id) - assert iface2 is not None - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - assert node1.get_iface(iface1_data.id) def test_add_node_to_net(self, session: Session, ip_prefixes: IpPrefixes): # given @@ -65,20 +46,11 @@ class TestLinks: iface1_data = ip_prefixes.create_iface(node1) # when - iface1, iface2 = session.add_link( - node1.id, node2.id, iface1_data=iface1_data, options=LINK_OPTIONS - ) + session.add_link(node1.id, node2.id, iface1_data=iface1_data) # then - assert len(session.link_manager.links()) == 1 - assert iface1 is not None - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem + assert node2.links() assert node1.get_iface(iface1_data.id) - assert iface2 is not None - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - assert node2.get_iface(iface1_data.id) def test_add_net_to_node(self, session: Session, ip_prefixes: IpPrefixes): # given @@ -87,332 +59,199 @@ class TestLinks: iface2_data = ip_prefixes.create_iface(node2) # when - iface1, iface2 = session.add_link( - node1.id, node2.id, iface2_data=iface2_data, options=LINK_OPTIONS - ) + session.add_link(node1.id, node2.id, iface2_data=iface2_data) # then - assert len(session.link_manager.links()) == 1 - assert iface1 is not None - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert node1.get_iface(iface1.id) - assert iface2 is not None - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - assert node2.get_iface(iface2.id) + assert node1.links() + assert node2.get_iface(iface2_data.id) - def test_add_net_to_net(self, session: Session): + def test_add_net_to_net(self, session): # given node1 = session.add_node(SwitchNode) node2 = session.add_node(SwitchNode) # when - iface1, iface2 = session.add_link(node1.id, node2.id, options=LINK_OPTIONS) + session.add_link(node1.id, node2.id) # then - assert len(session.link_manager.links()) == 1 - assert iface1 is not None - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2 is not None - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - - def test_add_node_to_node_uni(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(CoreNode) - node2 = session.add_node(CoreNode) - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - link_options1 = LinkOptions( - delay=50, - bandwidth=5000000, - loss=25, - dup=25, - jitter=10, - buffer=100, - unidirectional=True, - ) - link_options2 = LinkOptions( - delay=51, - bandwidth=5000001, - loss=26, - dup=26, - jitter=11, - buffer=101, - unidirectional=True, - ) - - # when - iface1, iface2 = session.add_link( - node1.id, node2.id, iface1_data, iface2_data, link_options1 - ) - session.update_link( - node2.id, node1.id, iface2_data.id, iface1_data.id, link_options2 - ) - - # then - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1_data.id) - assert node2.get_iface(iface2_data.id) - assert iface1 is not None - assert iface1.options == link_options1 - assert iface1.has_netem - assert iface2 is not None - assert iface2.options == link_options2 - assert iface2.has_netem + assert node1.links() def test_update_node_to_net(self, session: Session, ip_prefixes: IpPrefixes): # given + delay = 50 + bandwidth = 5000000 + loss = 25 + dup = 25 + jitter = 10 + buffer = 100 node1 = session.add_node(CoreNode) node2 = session.add_node(SwitchNode) iface1_data = 
ip_prefixes.create_iface(node1) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data) - assert len(session.link_manager.links()) == 1 - assert iface1.options != LINK_OPTIONS - assert iface2.options != LINK_OPTIONS + session.add_link(node1.id, node2.id, iface1_data) + iface1 = node1.get_iface(iface1_data.id) + assert iface1.getparam("delay") != delay + assert iface1.getparam("bw") != bandwidth + assert iface1.getparam("loss") != loss + assert iface1.getparam("duplicate") != dup + assert iface1.getparam("jitter") != jitter + assert iface1.getparam("buffer") != buffer # when - session.update_link(node1.id, node2.id, iface1.id, iface2.id, LINK_OPTIONS) + options = LinkOptions( + delay=delay, + bandwidth=bandwidth, + loss=loss, + dup=dup, + jitter=jitter, + buffer=buffer, + ) + session.update_link( + node1.id, node2.id, iface1_id=iface1_data.id, options=options + ) # then - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem + assert iface1.getparam("delay") == delay + assert iface1.getparam("bw") == bandwidth + assert iface1.getparam("loss") == loss + assert iface1.getparam("duplicate") == dup + assert iface1.getparam("jitter") == jitter + assert iface1.getparam("buffer") == buffer def test_update_net_to_node(self, session: Session, ip_prefixes: IpPrefixes): # given + delay = 50 + bandwidth = 5000000 + loss = 25 + dup = 25 + jitter = 10 + buffer = 100 node1 = session.add_node(SwitchNode) node2 = session.add_node(CoreNode) iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface2_data=iface2_data) - assert iface1.options != LINK_OPTIONS - assert iface2.options != LINK_OPTIONS + session.add_link(node1.id, node2.id, iface2_data=iface2_data) + iface2 = node2.get_iface(iface2_data.id) + assert iface2.getparam("delay") != delay + assert iface2.getparam("bw") != bandwidth + assert iface2.getparam("loss") != loss + assert iface2.getparam("duplicate") != dup + assert iface2.getparam("jitter") != jitter + assert iface2.getparam("buffer") != buffer # when - session.update_link(node1.id, node2.id, iface1.id, iface2.id, LINK_OPTIONS) + options = LinkOptions( + delay=delay, + bandwidth=bandwidth, + loss=loss, + dup=dup, + jitter=jitter, + buffer=buffer, + ) + session.update_link( + node1.id, node2.id, iface2_id=iface2_data.id, options=options + ) # then - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem + assert iface2.getparam("delay") == delay + assert iface2.getparam("bw") == bandwidth + assert iface2.getparam("loss") == loss + assert iface2.getparam("duplicate") == dup + assert iface2.getparam("jitter") == jitter + assert iface2.getparam("buffer") == buffer def test_update_ptp(self, session: Session, ip_prefixes: IpPrefixes): # given + delay = 50 + bandwidth = 5000000 + loss = 25 + dup = 25 + jitter = 10 + buffer = 100 node1 = session.add_node(CoreNode) node2 = session.add_node(CoreNode) iface1_data = ip_prefixes.create_iface(node1) iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data, iface2_data) - assert iface1.options != LINK_OPTIONS - assert iface2.options != LINK_OPTIONS + session.add_link(node1.id, node2.id, iface1_data, iface2_data) + iface1 = node1.get_iface(iface1_data.id) + iface2 = node2.get_iface(iface2_data.id) + assert iface1.getparam("delay") != delay + assert iface1.getparam("bw") != bandwidth + assert 
iface1.getparam("loss") != loss + assert iface1.getparam("duplicate") != dup + assert iface1.getparam("jitter") != jitter + assert iface1.getparam("buffer") != buffer + assert iface2.getparam("delay") != delay + assert iface2.getparam("bw") != bandwidth + assert iface2.getparam("loss") != loss + assert iface2.getparam("duplicate") != dup + assert iface2.getparam("jitter") != jitter + assert iface2.getparam("buffer") != buffer # when - session.update_link(node1.id, node2.id, iface1.id, iface2.id, LINK_OPTIONS) + options = LinkOptions( + delay=delay, + bandwidth=bandwidth, + loss=loss, + dup=dup, + jitter=jitter, + buffer=buffer, + ) + session.update_link(node1.id, node2.id, iface1_data.id, iface2_data.id, options) # then - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem + assert iface1.getparam("delay") == delay + assert iface1.getparam("bw") == bandwidth + assert iface1.getparam("loss") == loss + assert iface1.getparam("duplicate") == dup + assert iface1.getparam("jitter") == jitter + assert iface1.getparam("buffer") == buffer + assert iface2.getparam("delay") == delay + assert iface2.getparam("bw") == bandwidth + assert iface2.getparam("loss") == loss + assert iface2.getparam("duplicate") == dup + assert iface2.getparam("jitter") == jitter + assert iface2.getparam("buffer") == buffer - def test_update_net_to_net(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(SwitchNode) - iface1, iface2 = session.add_link(node1.id, node2.id) - assert iface1.options != LINK_OPTIONS - assert iface2.options != LINK_OPTIONS - - # when - session.update_link(node1.id, node2.id, iface1.id, iface2.id, LINK_OPTIONS) - - # then - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - - def test_update_error(self, session: Session, ip_prefixes: IpPrefixes): + def test_delete_ptp(self, session: Session, ip_prefixes: IpPrefixes): # given node1 = session.add_node(CoreNode) node2 = session.add_node(CoreNode) iface1_data = ip_prefixes.create_iface(node1) iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data, iface2_data) - assert iface1.options != LINK_OPTIONS - assert iface2.options != LINK_OPTIONS + session.add_link(node1.id, node2.id, iface1_data, iface2_data) + assert node1.get_iface(iface1_data.id) + assert node2.get_iface(iface2_data.id) # when - with pytest.raises(CoreError): - session.delete_link(node1.id, INVALID_ID, iface1.id, iface2.id) - - def test_clear_net_to_net(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(SwitchNode) - iface1, iface2 = session.add_link(node1.id, node2.id, options=LINK_OPTIONS) - assert iface1.options == LINK_OPTIONS - assert iface1.has_netem - assert iface2.options == LINK_OPTIONS - assert iface2.has_netem - - # when - options = LinkOptions(delay=0, bandwidth=0, loss=0.0, dup=0, jitter=0, buffer=0) - session.update_link(node1.id, node2.id, iface1.id, iface2.id, options) + session.delete_link(node1.id, node2.id, iface1_data.id, iface2_data.id) # then - assert iface1.options.is_clear() - assert not iface1.has_netem - assert iface2.options.is_clear() - assert not iface2.has_netem - - def test_delete_node_to_node(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(CoreNode) - node2 = 
session.add_node(CoreNode) - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data, iface2_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - session.delete_link(node1.id, node2.id, iface1.id, iface2.id) - - # then - assert len(session.link_manager.links()) == 0 - assert iface1.id not in node1.ifaces - assert iface2.id not in node2.ifaces + assert iface1_data.id not in node1.ifaces + assert iface2_data.id not in node2.ifaces def test_delete_node_to_net(self, session: Session, ip_prefixes: IpPrefixes): # given node1 = session.add_node(CoreNode) node2 = session.add_node(SwitchNode) iface1_data = ip_prefixes.create_iface(node1) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) + session.add_link(node1.id, node2.id, iface1_data) + assert node1.get_iface(iface1_data.id) # when - session.delete_link(node1.id, node2.id, iface1.id, iface2.id) + session.delete_link(node1.id, node2.id, iface1_id=iface1_data.id) # then - assert len(session.link_manager.links()) == 0 - assert iface1.id not in node1.ifaces - assert iface2.id not in node2.ifaces + assert iface1_data.id not in node1.ifaces def test_delete_net_to_node(self, session: Session, ip_prefixes: IpPrefixes): # given node1 = session.add_node(SwitchNode) node2 = session.add_node(CoreNode) iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface2_data=iface2_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) + session.add_link(node1.id, node2.id, iface2_data=iface2_data) + assert node2.get_iface(iface2_data.id) # when - session.delete_link(node1.id, node2.id, iface1.id, iface2.id) + session.delete_link(node1.id, node2.id, iface2_id=iface2_data.id) # then - assert len(session.link_manager.links()) == 0 - assert iface1.id not in node1.ifaces - assert iface2.id not in node2.ifaces - - def test_delete_net_to_net(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(SwitchNode) - iface1, iface2 = session.add_link(node1.id, node2.id) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - session.delete_link(node1.id, node2.id, iface1.id, iface2.id) - - # then - assert len(session.link_manager.links()) == 0 - assert iface1.id not in node1.ifaces - assert iface2.id not in node2.ifaces - - def test_delete_node_error(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(SwitchNode) - iface1, iface2 = session.add_link(node1.id, node2.id) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - with pytest.raises(CoreError): - session.delete_link(node1.id, INVALID_ID, iface1.id, iface2.id) - with pytest.raises(CoreError): - session.delete_link(INVALID_ID, node2.id, iface1.id, iface2.id) - - def test_delete_net_to_net_error(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(SwitchNode) - node3 = session.add_node(SwitchNode) - iface1, iface2 = 
session.add_link(node1.id, node2.id) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - with pytest.raises(CoreError): - session.delete_link(node1.id, node3.id, iface1.id, iface2.id) - - def test_delete_node_to_net_error(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(CoreNode) - node2 = session.add_node(SwitchNode) - node3 = session.add_node(SwitchNode) - iface1_data = ip_prefixes.create_iface(node1) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - with pytest.raises(CoreError): - session.delete_link(node1.id, node3.id, iface1.id, iface2.id) - - def test_delete_net_to_node_error(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(SwitchNode) - node2 = session.add_node(CoreNode) - node3 = session.add_node(SwitchNode) - iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface2_data=iface2_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - with pytest.raises(CoreError): - session.delete_link(node1.id, node3.id, iface1.id, iface2.id) - - def test_delete_node_to_node_error(self, session: Session, ip_prefixes: IpPrefixes): - # given - node1 = session.add_node(CoreNode) - node2 = session.add_node(CoreNode) - node3 = session.add_node(SwitchNode) - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - iface1, iface2 = session.add_link(node1.id, node2.id, iface1_data, iface2_data) - assert len(session.link_manager.links()) == 1 - assert node1.get_iface(iface1.id) - assert node2.get_iface(iface2.id) - - # when - with pytest.raises(CoreError): - session.delete_link(node1.id, node3.id, iface1.id, iface2.id) + assert iface2_data.id not in node2.ifaces diff --git a/daemon/tests/test_nodes.py b/daemon/tests/test_nodes.py index bb76bb4e..8ed21f27 100644 --- a/daemon/tests/test_nodes.py +++ b/daemon/tests/test_nodes.py @@ -1,6 +1,6 @@ import pytest -from core.emulator.data import InterfaceData +from core.emulator.data import InterfaceData, NodeOptions from core.emulator.session import Session from core.errors import CoreError from core.nodes.base import CoreNode @@ -14,8 +14,7 @@ class TestNodes: @pytest.mark.parametrize("model", MODELS) def test_node_add(self, session: Session, model: str): # given - options = CoreNode.create_options() - options.model = model + options = NodeOptions(model=model) # when node = session.add_node(CoreNode, options=options) @@ -25,30 +24,19 @@ class TestNodes: assert node.alive() assert node.up - def test_node_set_pos(self, session: Session): + def test_node_update(self, session: Session): # given node = session.add_node(CoreNode) - x, y = 100.0, 50.0 + position_value = 100 + update_options = NodeOptions() + update_options.set_position(x=position_value, y=position_value) # when - session.set_node_pos(node, x, y) + session.edit_node(node.id, update_options) # then - assert node.position.x == x - assert node.position.y == y - - def test_node_set_geo(self, session: Session): - # given - node = session.add_node(CoreNode) - lon, lat, alt = 0.0, 0.0, 0.0 - - # when - session.set_node_geo(node, lon, lat, alt) - - # then - assert node.position.lon == lon - assert node.position.lat == lat - assert 
node.position.alt == alt + assert node.position.x == position_value + assert node.position.y == position_value def test_node_delete(self, session: Session): # given @@ -61,40 +49,6 @@ class TestNodes: with pytest.raises(CoreError): session.get_node(node.id, CoreNode) - def test_node_add_iface(self, session: Session): - # given - node = session.add_node(CoreNode) - - # when - iface = node.create_iface() - - # then - assert iface.id in node.ifaces - - def test_node_get_iface(self, session: Session): - # given - node = session.add_node(CoreNode) - iface = node.create_iface() - assert iface.id in node.ifaces - - # when - iface2 = node.get_iface(iface.id) - - # then - assert iface == iface2 - - def test_node_delete_iface(self, session: Session): - # given - node = session.add_node(CoreNode) - iface = node.create_iface() - assert iface.id in node.ifaces - - # when - node.delete_iface(iface.id) - - # then - assert iface.id not in node.ifaces - @pytest.mark.parametrize( "mac,expected", [ @@ -105,11 +59,12 @@ class TestNodes: def test_node_set_mac(self, session: Session, mac: str, expected: str): # given node = session.add_node(CoreNode) + switch = session.add_node(SwitchNode) iface_data = InterfaceData() - iface = node.create_iface(iface_data) + iface = node.new_iface(switch, iface_data) # when - iface.set_mac(mac) + node.set_mac(iface.node_id, mac) # then assert str(iface.mac) == expected @@ -120,12 +75,13 @@ class TestNodes: def test_node_set_mac_exception(self, session: Session, mac: str): # given node = session.add_node(CoreNode) + switch = session.add_node(SwitchNode) iface_data = InterfaceData() - iface = node.create_iface(iface_data) + iface = node.new_iface(switch, iface_data) # when with pytest.raises(CoreError): - iface.set_mac(mac) + node.set_mac(iface.node_id, mac) @pytest.mark.parametrize( "ip,expected,is_ip6", @@ -139,11 +95,12 @@ class TestNodes: def test_node_add_ip(self, session: Session, ip: str, expected: str, is_ip6: bool): # given node = session.add_node(CoreNode) + switch = session.add_node(SwitchNode) iface_data = InterfaceData() - iface = node.create_iface(iface_data) + iface = node.new_iface(switch, iface_data) # when - iface.add_ip(ip) + node.add_ip(iface.node_id, ip) # then if is_ip6: @@ -154,13 +111,14 @@ class TestNodes: def test_node_add_ip_exception(self, session): # given node = session.add_node(CoreNode) + switch = session.add_node(SwitchNode) iface_data = InterfaceData() - iface = node.create_iface(iface_data) + iface = node.new_iface(switch, iface_data) ip = "256.168.0.1/24" # when with pytest.raises(CoreError): - iface.add_ip(ip) + node.add_ip(iface.node_id, ip) @pytest.mark.parametrize("net_type", NET_TYPES) def test_net(self, session, net_type): diff --git a/daemon/tests/test_services.py b/daemon/tests/test_services.py index 69234e3a..44776ea2 100644 --- a/daemon/tests/test_services.py +++ b/daemon/tests/test_services.py @@ -1,5 +1,5 @@ import itertools -from pathlib import Path +import os import pytest from mock import MagicMock @@ -9,8 +9,8 @@ from core.errors import CoreCommandError from core.nodes.base import CoreNode from core.services.coreservices import CoreService, ServiceDependencies, ServiceManager -_PATH: Path = Path(__file__).resolve().parent -_SERVICES_PATH = _PATH / "myservices" +_PATH = os.path.abspath(os.path.dirname(__file__)) +_SERVICES_PATH = os.path.join(_PATH, "myservices") SERVICE_ONE = "MyService" SERVICE_TWO = "MyService2" @@ -53,7 +53,7 @@ class TestServices: total_service = len(node.services) # when - 
session.services.add_services(node, node.model, [SERVICE_ONE, SERVICE_TWO]) + session.services.add_services(node, node.type, [SERVICE_ONE, SERVICE_TWO]) # then assert node.services @@ -64,15 +64,15 @@ class TestServices: ServiceManager.add_services(_SERVICES_PATH) my_service = ServiceManager.get(SERVICE_ONE) node = session.add_node(CoreNode) - file_path = Path(my_service.configs[0]) - file_path = node.host_path(file_path) + file_name = my_service.configs[0] + file_path = node.hostfilename(file_name) # when session.services.create_service_files(node, my_service) # then if not request.config.getoption("mock"): - assert file_path.exists() + assert os.path.exists(file_path) def test_service_validate(self, session: Session): # given diff --git a/daemon/tests/test_utils.py b/daemon/tests/test_utils.py index 21d092ac..5a4f25a4 100644 --- a/daemon/tests/test_utils.py +++ b/daemon/tests/test_utils.py @@ -9,7 +9,7 @@ class TestUtils: no_args = "()" one_arg = "('one',)" two_args = "('one', 'two')" - unicode_args = "('one', 'two', 'three')" + unicode_args = u"('one', 'two', 'three')" # when no_args = utils.make_tuple_fromstr(no_args, str) diff --git a/daemon/tests/test_xml.py b/daemon/tests/test_xml.py index 6841da8e..8a6e465d 100644 --- a/daemon/tests/test_xml.py +++ b/daemon/tests/test_xml.py @@ -1,16 +1,15 @@ -from pathlib import Path from tempfile import TemporaryFile from xml.etree import ElementTree import pytest -from core.emulator.data import IpPrefixes, LinkOptions +from core.emulator.data import IpPrefixes, LinkOptions, NodeOptions from core.emulator.enumerations import EventTypes from core.emulator.session import Session from core.errors import CoreError from core.location.mobility import BasicRangeModel from core.nodes.base import CoreNode -from core.nodes.network import SwitchNode, WlanNode +from core.nodes.network import PtpNet, SwitchNode, WlanNode from core.services.utility import SshService @@ -35,7 +34,7 @@ class TestXml: # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -65,21 +64,28 @@ class TestXml: :param tmpdir: tmpdir to create data in :param ip_prefixes: generates ip addresses for nodes """ + # create ptp + ptp_node = session.add_node(PtpNet) + # create nodes node1 = session.add_node(CoreNode) node2 = session.add_node(CoreNode) - # link nodes - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - session.add_link(node1.id, node2.id, iface1_data, iface2_data) + # link nodes to ptp net + for node in [node1, node2]: + iface_data = ip_prefixes.create_iface(node) + session.add_link(node.id, ptp_node.id, iface1_data=iface_data) # instantiate session session.instantiate() + # get ids for nodes + node1_id = node1.id + node2_id = node2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -91,19 +97,16 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(node2.id, CoreNode) - # verify no links are known - assert len(session.link_manager.links()) == 0 + assert not session.get_node(node2_id, CoreNode) # load saved xml session.open_xml(file_path, start=True) # 
verify nodes have been recreated - assert session.get_node(node1.id, CoreNode) - assert session.get_node(node2.id, CoreNode) - assert len(session.link_manager.links()) == 1 + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, CoreNode) def test_xml_ptp_services( self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes @@ -115,14 +118,18 @@ class TestXml: :param tmpdir: tmpdir to create data in :param ip_prefixes: generates ip addresses for nodes """ + # create ptp + ptp_node = session.add_node(PtpNet) + # create nodes - node1 = session.add_node(CoreNode) + options = NodeOptions(model="host") + node1 = session.add_node(CoreNode, options=options) node2 = session.add_node(CoreNode) # link nodes to ptp net - iface1_data = ip_prefixes.create_iface(node1) - iface2_data = ip_prefixes.create_iface(node2) - session.add_link(node1.id, node2.id, iface1_data, iface2_data) + for node in [node1, node2]: + iface_data = ip_prefixes.create_iface(node) + session.add_link(node.id, ptp_node.id, iface1_data=iface_data) # set custom values for node service session.services.set_service(node1.id, SshService.name) @@ -135,9 +142,13 @@ class TestXml: # instantiate session session.instantiate() + # get ids for nodes + node1_id = node1.id + node2_id = node2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -149,9 +160,9 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(node2.id, CoreNode) + assert not session.get_node(node2_id, CoreNode) # load saved xml session.open_xml(file_path, start=True) @@ -160,8 +171,8 @@ class TestXml: service = session.services.get_service(node1.id, SshService.name) # verify nodes have been recreated - assert session.get_node(node1.id, CoreNode) - assert session.get_node(node2.id, CoreNode) + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, CoreNode) assert service.config_data.get(service_file) == file_data def test_xml_mobility( @@ -175,26 +186,31 @@ class TestXml: :param ip_prefixes: generates ip addresses for nodes """ # create wlan - wlan = session.add_node(WlanNode) - session.mobility.set_model(wlan, BasicRangeModel, {"test": "1"}) + wlan_node = session.add_node(WlanNode) + session.mobility.set_model(wlan_node, BasicRangeModel, {"test": "1"}) # create nodes - options = CoreNode.create_options() - options.model = "mdr" + options = NodeOptions(model="mdr") + options.set_position(0, 0) node1 = session.add_node(CoreNode, options=options) node2 = session.add_node(CoreNode, options=options) # link nodes for node in [node1, node2]: iface_data = ip_prefixes.create_iface(node) - session.add_link(node.id, wlan.id, iface1_data=iface_data) + session.add_link(node.id, wlan_node.id, iface1_data=iface_data) # instantiate session session.instantiate() + # get ids for nodes + wlan_id = wlan_node.id + node1_id = node1.id + node2_id = node2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -206,20 +222,20 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not 
session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(node2.id, CoreNode) + assert not session.get_node(node2_id, CoreNode) # load saved xml session.open_xml(file_path, start=True) # retrieve configuration we set originally - value = str(session.mobility.get_config("test", wlan.id, BasicRangeModel.name)) + value = str(session.mobility.get_config("test", wlan_id, BasicRangeModel.name)) # verify nodes and configuration were restored - assert session.get_node(node1.id, CoreNode) - assert session.get_node(node2.id, CoreNode) - assert session.get_node(wlan.id, WlanNode) + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, CoreNode) + assert session.get_node(wlan_id, WlanNode) assert value == "1" def test_network_to_network(self, session: Session, tmpdir: TemporaryFile): @@ -239,9 +255,13 @@ class TestXml: # instantiate session session.instantiate() + # get ids for nodes + node1_id = switch1.id + node2_id = switch2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -253,19 +273,19 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(switch1.id, SwitchNode) + assert not session.get_node(node1_id, SwitchNode) with pytest.raises(CoreError): - assert not session.get_node(switch2.id, SwitchNode) + assert not session.get_node(node2_id, SwitchNode) # load saved xml session.open_xml(file_path, start=True) # verify nodes have been recreated - switch1 = session.get_node(switch1.id, SwitchNode) - switch2 = session.get_node(switch2.id, SwitchNode) + switch1 = session.get_node(node1_id, SwitchNode) + switch2 = session.get_node(node2_id, SwitchNode) assert switch1 assert switch2 - assert len(session.link_manager.links()) == 1 + assert len(switch1.links() + switch2.links()) == 1 def test_link_options( self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes @@ -295,9 +315,13 @@ class TestXml: # instantiate session session.instantiate() + # get ids for nodes + node1_id = node1.id + node2_id = switch.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -309,25 +333,27 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(switch.id, SwitchNode) + assert not session.get_node(node2_id, SwitchNode) # load saved xml session.open_xml(file_path, start=True) # verify nodes have been recreated - assert session.get_node(node1.id, CoreNode) - assert session.get_node(switch.id, SwitchNode) - assert len(session.link_manager.links()) == 1 - link = list(session.link_manager.links())[0] - link_options = link.options() - assert options.loss == link_options.loss - assert options.bandwidth == link_options.bandwidth - assert options.jitter == link_options.jitter - assert options.delay == link_options.delay - assert options.dup == link_options.dup - assert options.buffer == link_options.buffer + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, SwitchNode) + links = [] + for node_id in session.nodes: + node 
= session.nodes[node_id] + links += node.links() + link = links[0] + assert options.loss == link.options.loss + assert options.bandwidth == link.options.bandwidth + assert options.jitter == link.options.jitter + assert options.delay == link.options.delay + assert options.dup == link.options.dup + assert options.buffer == link.options.buffer def test_link_options_ptp( self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes @@ -358,9 +384,13 @@ class TestXml: # instantiate session session.instantiate() + # get ids for nodes + node1_id = node1.id + node2_id = node2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -372,25 +402,27 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(node2.id, CoreNode) + assert not session.get_node(node2_id, CoreNode) # load saved xml session.open_xml(file_path, start=True) # verify nodes have been recreated - assert session.get_node(node1.id, CoreNode) - assert session.get_node(node2.id, CoreNode) - assert len(session.link_manager.links()) == 1 - link = list(session.link_manager.links())[0] - link_options = link.options() - assert options.loss == link_options.loss - assert options.bandwidth == link_options.bandwidth - assert options.jitter == link_options.jitter - assert options.delay == link_options.delay - assert options.dup == link_options.dup - assert options.buffer == link_options.buffer + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, CoreNode) + links = [] + for node_id in session.nodes: + node = session.nodes[node_id] + links += node.links() + link = links[0] + assert options.loss == link.options.loss + assert options.bandwidth == link.options.bandwidth + assert options.jitter == link.options.jitter + assert options.delay == link.options.delay + assert options.dup == link.options.dup + assert options.buffer == link.options.buffer def test_link_options_bidirectional( self, session: Session, tmpdir: TemporaryFile, ip_prefixes: IpPrefixes @@ -417,9 +449,7 @@ class TestXml: options1.dup = 5 options1.jitter = 5 options1.buffer = 50 - iface1, iface2 = session.add_link( - node1.id, node2.id, iface1_data, iface2_data, options1 - ) + session.add_link(node1.id, node2.id, iface1_data, iface2_data, options1) options2 = LinkOptions() options2.unidirectional = 1 options2.bandwidth = 10000 @@ -428,14 +458,20 @@ class TestXml: options2.dup = 10 options2.jitter = 10 options2.buffer = 100 - session.update_link(node2.id, node1.id, iface2.id, iface1.id, options2) + session.update_link( + node2.id, node1.id, iface2_data.id, iface1_data.id, options2 + ) # instantiate session session.instantiate() + # get ids for nodes + node1_id = node1.id + node2_id = node2.id + # save xml xml_file = tmpdir.join("session.xml") - file_path = Path(xml_file.strpath) + file_path = xml_file.strpath session.save_xml(file_path) # verify xml file was created and can be parsed @@ -447,26 +483,32 @@ class TestXml: # verify nodes have been removed from session with pytest.raises(CoreError): - assert not session.get_node(node1.id, CoreNode) + assert not session.get_node(node1_id, CoreNode) with pytest.raises(CoreError): - assert not session.get_node(node2.id, CoreNode) + assert not 
session.get_node(node2_id, CoreNode) # load saved xml session.open_xml(file_path, start=True) # verify nodes have been recreated - assert session.get_node(node1.id, CoreNode) - assert session.get_node(node2.id, CoreNode) - assert len(session.link_manager.links()) == 1 - assert options1.bandwidth == iface1.options.bandwidth - assert options1.delay == iface1.options.delay - assert options1.loss == iface1.options.loss - assert options1.dup == iface1.options.dup - assert options1.jitter == iface1.options.jitter - assert options1.buffer == iface1.options.buffer - assert options2.bandwidth == iface2.options.bandwidth - assert options2.delay == iface2.options.delay - assert options2.loss == iface2.options.loss - assert options2.dup == iface2.options.dup - assert options2.jitter == iface2.options.jitter - assert options2.buffer == iface2.options.buffer + assert session.get_node(node1_id, CoreNode) + assert session.get_node(node2_id, CoreNode) + links = [] + for node_id in session.nodes: + node = session.nodes[node_id] + links += node.links() + assert len(links) == 2 + link1 = links[0] + link2 = links[1] + assert options1.bandwidth == link1.options.bandwidth + assert options1.delay == link1.options.delay + assert options1.loss == link1.options.loss + assert options1.dup == link1.options.dup + assert options1.jitter == link1.options.jitter + assert options1.buffer == link1.options.buffer + assert options2.bandwidth == link2.options.bandwidth + assert options2.delay == link2.options.delay + assert options2.loss == link2.options.loss + assert options2.dup == link2.options.dup + assert options2.jitter == link2.options.jitter + assert options2.buffer == link2.options.buffer diff --git a/dockerfiles/Dockerfile.centos b/dockerfiles/Dockerfile.centos deleted file mode 100644 index 06654486..00000000 --- a/dockerfiles/Dockerfile.centos +++ /dev/null @@ -1,78 +0,0 @@ -# syntax=docker/dockerfile:1 -FROM centos:7 -LABEL Description="CORE Docker CentOS Image" - -ARG PREFIX=/usr -ARG BRANCH=master -ENV LANG en_US.UTF-8 -ARG PROTOC_VERSION=3.19.6 -ARG VENV_PATH=/opt/core/venv -ENV PATH="$PATH:${VENV_PATH}/bin" -WORKDIR /opt - -# install system dependencies -RUN yum -y update && \ - yum install -y \ - xterm \ - git \ - sudo \ - wget \ - tzdata \ - unzip \ - libpcap-devel \ - libpcre3-devel \ - libxml2-devel \ - protobuf-devel \ - unzip \ - uuid-devel \ - tcpdump \ - make && \ - yum-builddep -y python3 && \ - yum autoremove -y && \ - yum install -y hostname - -# install python3.9 -RUN wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz && \ - tar xf Python-3.9.15.tgz && \ - cd Python-3.9.15 && \ - ./configure --enable-optimizations --with-ensurepip=install && \ - make -j$(nproc) altinstall && \ - python3.9 -m pip install --upgrade pip && \ - cd /opt && \ - rm -rf Python-3.9.15 - -# install core -RUN git clone https://github.com/coreemu/core && \ - cd core && \ - git checkout ${BRANCH} && \ - NO_SYSTEM=1 PYTHON=/usr/local/bin/python3.9 ./setup.sh && \ - PATH=/root/.local/bin:$PATH PYTHON=/usr/local/bin/python3.9 inv install -v -p ${PREFIX} --no-python - -# install emane -RUN wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - cd emane-1.3.3-release-1/rpms/el7/x86_64 && \ - yum install -y epel-release && \ - yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm && \ - cd ../../../.. 
&& \ - rm emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - rm -rf emane-1.3.3-release-1 - -# install emane python bindings -RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-x86_64.zip && \ - mkdir protoc && \ - unzip protoc-${PROTOC_VERSION}-linux-x86_64.zip -d protoc && \ - git clone https://github.com/adjacentlink/emane.git && \ - cd emane && \ - git checkout v1.3.3 && \ - ./autogen.sh && \ - PYTHON=${VENV_PATH}/bin/python ./configure --prefix=/usr && \ - cd src/python && \ - PATH=/opt/protoc/bin:$PATH make && \ - ${VENV_PATH}/bin/python -m pip install . && \ - cd /opt && \ - rm -rf protoc && \ - rm -rf emane && \ - rm -f protoc-${PROTOC_VERSION}-linux-x86_64.zip - -WORKDIR /root diff --git a/dockerfiles/Dockerfile.centos-package b/dockerfiles/Dockerfile.centos-package deleted file mode 100644 index 8d4a1296..00000000 --- a/dockerfiles/Dockerfile.centos-package +++ /dev/null @@ -1,89 +0,0 @@ -# syntax=docker/dockerfile:1 -FROM centos:7 -LABEL Description="CORE CentOS Image" - -ENV LANG en_US.UTF-8 -ARG PROTOC_VERSION=3.19.6 -ARG VENV_PATH=/opt/core/venv -ENV PATH="$PATH:${VENV_PATH}/bin" -WORKDIR /opt - -# install basic dependencies -RUN yum -y update && \ - yum install -y \ - xterm \ - git \ - sudo \ - wget \ - tzdata \ - unzip \ - libpcap-devel \ - libpcre3-devel \ - libxml2-devel \ - protobuf-devel \ - unzip \ - uuid-devel \ - tcpdump \ - automake \ - gawk \ - libreadline-devel \ - libtool \ - pkg-config \ - make && \ - yum-builddep -y python3 && \ - yum autoremove -y && \ - yum install -y hostname - -# install python3.9 -RUN wget https://www.python.org/ftp/python/3.9.15/Python-3.9.15.tgz && \ - tar xf Python-3.9.15.tgz && \ - cd Python-3.9.15 && \ - ./configure --enable-optimizations --with-ensurepip=install && \ - make -j$(nproc) altinstall && \ - python3.9 -m pip install --upgrade pip && \ - cd /opt && \ - rm -rf Python-3.9.15 - -# install core -COPY core_*.rpm . -RUN PYTHON=/usr/local/bin/python3.9 yum install -y ./core_*.rpm && \ - rm -f core_*.rpm - -# install ospf mdr -RUN git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git && \ - cd ospf-mdr && \ - ./bootstrap.sh && \ - ./configure --disable-doc --enable-user=root --enable-group=root \ - --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \ - --localstatedir=/var/run/quagga && \ - make -j$(nproc) && \ - make install && \ - cd /opt && \ - rm -rf ospf-mdr - - # install emane -RUN wget -q https://adjacentlink.com/downloads/emane/emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - tar xf emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - cd emane-1.3.3-release-1/rpms/el7/x86_64 && \ - yum install -y epel-release && \ - yum install -y ./openstatistic*.rpm ./emane*.rpm ./python3-emane_*.rpm && \ - cd ../../../.. && \ - rm emane-1.3.3-release-1.el7.x86_64.tar.gz && \ - rm -rf emane-1.3.3-release-1 - -# install emane python bindings -RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-x86_64.zip && \ - mkdir protoc && \ - unzip protoc-${PROTOC_VERSION}-linux-x86_64.zip -d protoc && \ - git clone https://github.com/adjacentlink/emane.git && \ - cd emane && \ - git checkout v1.3.3 && \ - ./autogen.sh && \ - PYTHON=${VENV_PATH}/bin/python ./configure --prefix=/usr && \ - cd src/python && \ - PATH=/opt/protoc/bin:$PATH make && \ - ${VENV_PATH}/bin/python -m pip install . 
&& \ - cd /opt && \ - rm -rf protoc && \ - rm -rf emane && \ - rm -f protoc-${PROTOC_VERSION}-linux-x86_64.zip diff --git a/dockerfiles/Dockerfile.ubuntu-package b/dockerfiles/Dockerfile.ubuntu-package deleted file mode 100644 index b8f66165..00000000 --- a/dockerfiles/Dockerfile.ubuntu-package +++ /dev/null @@ -1,75 +0,0 @@ -# syntax=docker/dockerfile:1 -FROM ubuntu:22.04 -LABEL Description="CORE Docker Ubuntu Image" - -ENV DEBIAN_FRONTEND=noninteractive -ARG PROTOC_VERSION=3.19.6 -ARG VENV_PATH=/opt/core/venv -ENV PATH="$PATH:${VENV_PATH}/bin" -WORKDIR /opt - -# install basic dependencies -RUN apt-get update -y && \ - apt-get install -y --no-install-recommends \ - ca-certificates \ - python3 \ - python3-tk \ - python3-pip \ - python3-venv \ - libpcap-dev \ - libpcre3-dev \ - libprotobuf-dev \ - libxml2-dev \ - protobuf-compiler \ - unzip \ - uuid-dev \ - automake \ - gawk \ - git \ - wget \ - libreadline-dev \ - libtool \ - pkg-config \ - g++ \ - make \ - iputils-ping \ - tcpdump && \ - apt-get autoremove -y - -# install core -COPY core_*.deb . -RUN apt-get install -y ./core_*.deb && \ - rm -f core_*.deb - -# install ospf mdr -RUN git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git && \ - cd ospf-mdr && \ - ./bootstrap.sh && \ - ./configure --disable-doc --enable-user=root --enable-group=root \ - --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \ - --localstatedir=/var/run/quagga && \ - make -j$(nproc) && \ - make install && \ - cd /opt && \ - rm -rf ospf-mdr - -# install emane -RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-linux-x86_64.zip && \ - mkdir protoc && \ - unzip protoc-${PROTOC_VERSION}-linux-x86_64.zip -d protoc && \ - git clone https://github.com/adjacentlink/emane.git && \ - cd emane && \ - ./autogen.sh && \ - ./configure --prefix=/usr && \ - make -j$(nproc) && \ - make install && \ - cd src/python && \ - make clean && \ - PATH=/opt/protoc/bin:$PATH make && \ - ${VENV_PATH}/bin/python -m pip install . 
&& \ - cd /opt && \ - rm -rf protoc && \ - rm -rf emane && \ - rm -f protoc-${PROTOC_VERSION}-linux-x86_64.zip - -WORKDIR /root diff --git a/docs/architecture.md b/docs/architecture.md index b9c5c91c..bc0c628b 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -1,22 +1,28 @@ # CORE Architecture +* Table of Contents +{:toc} + ## Main Components * core-daemon - * Manages emulated sessions of nodes and links for a given network - * Nodes are created using Linux namespaces - * Links are created using Linux bridges and virtual ethernet peers - * Packets sent over links are manipulated using traffic control - * Provides gRPC API + * Manages emulated sessions of nodes and links for a given network + * Nodes are created using Linux namespaces + * Links are created using Linux bridges and virtual ethernet peers + * Packets sent over links are manipulated using traffic control + * Controlled via the CORE GUI + * Provides both a custom TLV API and gRPC API + * Python program that leverages a small C binary for node creation * core-gui - * GUI and daemon communicate over gRPC API - * Drag and drop creation for nodes and links - * Can launch terminals for emulated nodes in running sessions - * Can save/open scenario files to recreate previous sessions -* vnoded - * Command line utility for creating CORE node namespaces + * GUI and daemon communicate over the custom TLV API + * Drag and drop creation for nodes and links + * Can launch terminals for emulated nodes in running sessions + * Can save/open scenario files to recreate previous sessions + * TCL/TK program +* coresendmsg + * Command line utility for sending TLV API messages to the core-daemon * vcmd - * Command line utility for sending shell commands to nodes + * Command line utility for sending shell commands to nodes ![](static/architecture.png) @@ -45,14 +51,25 @@ filesystem in CORE. CORE combines these namespaces with Linux Ethernet bridging to form networks. Link characteristics are applied using Linux Netem queuing disciplines. -Nftables provides Ethernet frame filtering on Linux bridges. Wireless networks are -emulated by controlling which interfaces can send and receive with nftables +Ebtables provides Ethernet frame filtering on Linux bridges. Wireless networks are +emulated by controlling which interfaces can send and receive with ebtables rules. +## Prior Work + +The Tcl/Tk CORE GUI was originally derived from the open source +[IMUNES](http://imunes.net) project from the University of Zagreb as a custom +project within Boeing Research and Technology's Network Technology research +group in 2004. Since then they have developed the CORE framework to use Linux +namespacing, have developed a Python framework, and made numerous user and +kernel-space developments, such as support for wireless networks, IPsec, +distributed emulation, simulation integration, and more. The IMUNES project +also consists of userspace and kernel components. + ## Open Source Project and Resources CORE has been released by Boeing to the open source community under the BSD license. If you find CORE useful for your work, please contribute back to the project. Contributions can be as simple as reporting a bug, dropping a line of -encouragement, or can also include submitting patches or maintaining aspects -of the tool. +encouragement or technical suggestions to the mailing lists, or can also +include submitting patches or maintaining aspects of the tool. 
diff --git a/docs/configservices.md b/docs/configservices.md deleted file mode 100644 index da81aa48..00000000 --- a/docs/configservices.md +++ /dev/null @@ -1,196 +0,0 @@ -# Config Services - -## Overview - -Config services are a newer version of services for CORE, that leverage a -templating engine, for more robust service file creation. They also -have the power of configuration key/value pairs that values that can be -defined and displayed within the GUI, to help further tweak a service, -as needed. - -CORE services are a convenience for creating reusable dynamic scripts -to run on nodes, for carrying out specific task(s). - -This boilds down to the following functions: - -* generating files the service will use, either directly for commands or for configuration -* command(s) for starting a service -* command(s) for validating a service -* command(s) for stopping a service - -Most CORE nodes will have a default set of services to run, associated with -them. You can however customize the set of services a node will use. Or even -further define a new node type within the GUI, with a set of services, that -will allow quickly dragging and dropping that node type during creation. - -## Available Services - -| Service Group | Services | -|----------------------------------|-----------------------------------------------------------------------| -| [BIRD](services/bird.md) | BGP, OSPF, RADV, RIP, Static | -| [EMANE](services/emane.md) | Transport Service | -| [FRR](services/frr.md) | BABEL, BGP, OSPFv2, OSPFv3, PIMD, RIP, RIPNG, Zebra | -| [NRL](services/nrl.md) | arouted, MGEN Sink, MGEN Actor, NHDP, OLSR, OLSRORG, OLSRv2, SMF | -| [Quagga](services/quagga.md) | BABEL, BGP, OSPFv2, OSPFv3, OSPFv3 MDR, RIP, RIPNG, XPIMD, Zebra | -| [SDN](services/sdn.md) | OVS, RYU | -| [Security](services/security.md) | Firewall, IPsec, NAT, VPN Client, VPN Server | -| [Utility](services/utility.md) | ATD, Routing Utils, DHCP, FTP, IP Forward, PCAP, RADVD, SSF, UCARP | -| [XORP](services/xorp.md) | BGP, OLSR, OSPFv2, OSPFv3, PIMSM4, PIMSM6, RIP, RIPNG, Router Manager | - -## Node Types and Default Services - -Here are the default node types and their services: - -| Node Type | Services | -|-----------|--------------------------------------------------------------------------------------------------------------------------------------------| -| *router* | zebra, OSFPv2, OSPFv3, and IPForward services for IGP link-state routing. | -| *PC* | DefaultRoute service for having a default route when connected directly to a router. | -| *mdr* | zebra, OSPFv3MDR, and IPForward services for wireless-optimized MANET Designated Router routing. | -| *prouter* | a physical router, having the same default services as the *router* node type; for incorporating Linux testbed machines into an emulation. | - -Configuration files can be automatically generated by each service. For -example, CORE automatically generates routing protocol configuration for the -router nodes in order to simplify the creation of virtual networks. - -To change the services associated with a node, double-click on the node to -invoke its configuration dialog and click on the *Services...* button, -or right-click a node a choose *Services...* from the menu. -Services are enabled or disabled by clicking on their names. The button next to -each service name allows you to customize all aspects of this service for this -node. 
For example, special route redistribution commands could be inserted in -to the Quagga routing configuration associated with the zebra service. - -To change the default services associated with a node type, use the Node Types -dialog available from the *Edit* button at the end of the Layer-3 nodes -toolbar, or choose *Node types...* from the *Session* menu. Note that -any new services selected are not applied to existing nodes if the nodes have -been customized. - -The node types are saved in the GUI config file **~/.coregui/config.yaml**. -Keep this in mind when changing the default services for -existing node types; it may be better to simply create a new node type. It is -recommended that you do not change the default built-in node types. - -## New Services - -Services can save time required to configure nodes, especially if a number -of nodes require similar configuration procedures. New services can be -introduced to automate tasks. - -### Creating New Services - -!!! note - - The directory base name used in **custom_services_dir** below should - be unique and should not correspond to any existing Python module name. - For example, don't use the name **subprocess** or **services**. - -1. Modify the example service shown below - to do what you want. It could generate config/script files, mount per-node - directories, start processes/scripts, etc. Your file can define one or more - classes to be imported. You can create multiple Python files that will be imported. - -2. Put these files in a directory such as **~/.coregui/custom_services**. - -3. Add a **custom_config_services_dir = ~/.coregui/custom_services** entry to the - /etc/core/core.conf file. - -4. Restart the CORE daemon (core-daemon). Any import errors (Python syntax) - should be displayed in the terminal (or service log, like journalctl). - -5. Start using your custom service on your nodes. You can create a new node - type that uses your service, or change the default services for an existing - node type, or change individual nodes. - -### Example Custom Service - -Below is the skeleton for a custom service with some documentation. Most -people would likely only setup the required class variables **(name/group)**. -Then define the **files** to generate and implement the -**get_text_template** function to dynamically create the files wanted. Finally, -the **startup** commands would be supplied, which typically tend to be -running the shell files generated. 
- -```python -from typing import Dict, List - -from core.config import ConfigString, ConfigBool, Configuration -from core.configservice.base import ConfigService, ConfigServiceMode, ShadowDir - - -# class that subclasses ConfigService -class ExampleService(ConfigService): - # unique name for your service within CORE - name: str = "Example" - # the group your service is associated with, used for display in GUI - group: str = "ExampleGroup" - # directories that the service should shadow mount, hiding the system directory - directories: List[str] = [ - "/usr/local/core", - ] - # files that this service should generate, defaults to nodes home directory - # or can provide an absolute path to a mounted directory - files: List[str] = [ - "example-start.sh", - "/usr/local/core/file1", - ] - # executables that should exist on path, that this service depends on - executables: List[str] = [] - # other services that this service depends on, can be used to define service start order - dependencies: List[str] = [] - # commands to run to start this service - startup: List[str] = [] - # commands to run to validate this service - validate: List[str] = [] - # commands to run to stop this service - shutdown: List[str] = [] - # validation mode, blocking, non-blocking, and timer - validation_mode: ConfigServiceMode = ConfigServiceMode.BLOCKING - # configurable values that this service can use, for file generation - default_configs: List[Configuration] = [ - ConfigString(id="value1", label="Text"), - ConfigBool(id="value2", label="Boolean"), - ConfigString(id="value3", label="Multiple Choice", options=["value1", "value2", "value3"]), - ] - # sets of values to set for the configuration defined above, can be used to - # provide convenient sets of values to typically use - modes: Dict[str, Dict[str, str]] = { - "mode1": {"value1": "value1", "value2": "0", "value3": "value2"}, - "mode2": {"value1": "value2", "value2": "1", "value3": "value3"}, - "mode3": {"value1": "value3", "value2": "0", "value3": "value1"}, - } - # defines directories that this service can help shadow within a node - shadow_directories: List[ShadowDir] = [ - ShadowDir(path="/user/local/core", src="/opt/core") - ] - - def get_text_template(self, name: str) -> str: - return """ - # sample script 1 - # node id(${node.id}) name(${node.name}) - # config: ${config} - echo hello - """ -``` - -#### Validation Mode - -Validation modes are used to determine if a service has started up successfully. - -* blocking - startup commands are expected to run til completion and return 0 exit code -* non-blocking - startup commands are ran, but do not wait for completion -* timer - startup commands are ran, and an arbitrary amount of time is waited to consider started - -#### Shadow Directories - -Shadow directories provide a convenience for copying a directory and the files within -it to a nodes home directory, to allow a unique set of per node files. 
- -* `ShadowDir(path="/user/local/core")` - copies files at the given location into the node -* `ShadowDir(path="/user/local/core", src="/opt/core")` - copies files to the given location, - but sourced from the provided location -* `ShadowDir(path="/user/local/core", templates=True)` - copies files and treats them as - templates for generation -* `ShadowDir(path="/user/local/core", has_node_paths=True)` - copies files from the given - location, and looks for unique node names directories within it, using a directory named - default, when not preset diff --git a/docs/ctrlnet.md b/docs/ctrlnet.md index d20e3a41..9ecc2e3f 100644 --- a/docs/ctrlnet.md +++ b/docs/ctrlnet.md @@ -1,10 +1,13 @@ # CORE Control Network +* Table of Contents +{:toc} + ## Overview The CORE control network allows the virtual nodes to communicate with their host environment. There are two types: the primary control network and -auxiliary control networks. The primary control network is used mainly for +auxiliary control networks. The primary control network is used mainly for communicating with the virtual nodes from host machines and for master-slave communications in a multi-server distributed environment. Auxiliary control networks have been introduced to for routing namespace hosted emulation @@ -27,19 +30,15 @@ new sessions will use by default. To simultaneously run multiple sessions with control networks, the session option should be used instead of the *core.conf* default. -!!! note +> **NOTE:** If you have a large scenario with more than 253 nodes, use a control +network prefix that allows more than the suggested */24*, such as */23* or +greater. - If you have a large scenario with more than 253 nodes, use a control - network prefix that allows more than the suggested */24*, such as */23* or - greater. - -!!! note - - Running a session with a control network can fail if a previous - session has set up a control network and the its bridge is still up. Close - the previous session first or wait for it to complete. If unable to, the - **core-daemon** may need to be restarted and the lingering bridge(s) removed - manually. +> **NOTE:** Running a session with a control network can fail if a previous +session has set up a control network and the its bridge is still up. Close +the previous session first or wait for it to complete. If unable to, the +*core-daemon* may need to be restarted and the lingering bridge(s) removed +manually. ```shell # Restart the CORE Daemon @@ -53,13 +52,11 @@ for cb in $ctrlbridges; do done ``` -!!! note - - If adjustments to the primary control network configuration made in - **/etc/core/core.conf** do not seem to take affect, check if there is anything - set in the *Session Menu*, the *Options...* dialog. They may need to be - cleared. These per session settings override the defaults in - **/etc/core/core.conf**. +> **NOTE:** If adjustments to the primary control network configuration made in +*/etc/core/core.conf* do not seem to take affect, check if there is anything +set in the *Session Menu*, the *Options...* dialog. They may need to be +cleared. These per session settings override the defaults in +*/etc/core/core.conf*. ## Control Network in Distributed Sessions @@ -105,9 +102,9 @@ argument being the keyword *"shutdown"*. Starting with EMANE 0.9.2, CORE will run EMANE instances within namespaces. Since it is advisable to separate the OTA traffic from other traffic, we will need more than single channel leading out from the namespace. 
Up to three -auxiliary control networks may be defined. Multiple control networks are set -up in */etc/core/core.conf* file. Lines *controlnet1*, *controlnet2* and -*controlnet3* define the auxiliary networks. +auxiliary control networks may be defined. Multiple control networks are set +up in */etc/core/core.conf* file. Lines *controlnet1*, *controlnet2* and +*controlnet3* define the auxiliary networks. For example, having the following */etc/core/core.conf*: @@ -117,20 +114,18 @@ controlnet1 = core1:172.18.1.0/24 core2:172.18.2.0/24 core3:172.18.3.0/24 controlnet2 = core1:172.19.1.0/24 core2:172.19.2.0/24 core3:172.19.3.0/24 ``` -This will activate the primary and two auxiliary control networks and add +This will activate the primary and two auxiliary control networks and add interfaces *ctrl0*, *ctrl1*, *ctrl2* to each node. One use case would be to assign *ctrl1* to the OTA manager device and *ctrl2* to the Event Service device in the EMANE Options dialog box and leave *ctrl0* for CORE control traffic. -!!! note - - *controlnet0* may be used in place of *controlnet* to configure - the primary control network. +> **NOTE:** *controlnet0* may be used in place of *controlnet* to configure +>the primary control network. Unlike the primary control network, the auxiliary control networks will not -employ tunneling since their primary purpose is for efficiently transporting -multicast EMANE OTA and event traffic. Note that there is no per-session +employ tunneling since their primary purpose is for efficiently transporting +multicast EMANE OTA and event traffic. Note that there is no per-session configuration for auxiliary control networks. To extend the auxiliary control networks across a distributed test @@ -144,11 +139,9 @@ controlnetif2 = eth2 controlnetif3 = eth3 ``` -!!! note - - There is no need to assign an interface to the primary control - network because tunnels are formed between the master and the slaves using IP - addresses that are provided in *servers.conf*. +> **NOTE:** There is no need to assign an interface to the primary control +>network because tunnels are formed between the master and the slaves using IP +>addresses that are provided in *servers.conf*. Shown below is a representative diagram of the configuration above. diff --git a/docs/devguide.md b/docs/devguide.md index 4fa43977..ba34a211 100644 --- a/docs/devguide.md +++ b/docs/devguide.md @@ -1,17 +1,21 @@ # CORE Developer's Guide -## Overview +* Table of Contents +{:toc} -The CORE source consists of several programming languages for +## Repository Overview + +The CORE source consists of several different programming languages for historical reasons. Current development focuses on the Python modules and daemon. Here is a brief description of the source directories. 
-| Directory | Description | -|-----------|--------------------------------------------------------------------------------------| -| daemon | Python CORE daemon/gui code that handles receiving API calls and creating containers | -| docs | Markdown Documentation currently hosted on GitHub | -| man | Template files for creating man pages for various CORE command line utilities | -| netns | C program for creating CORE containers | +| Directory | Description | +|---|---| +|daemon|Python CORE daemon code that handles receiving API calls and creating containers| +|docs|Markdown Documentation currently hosted on GitHub| +|gui|Tcl/Tk GUI| +|man|Template files for creating man pages for various CORE command line utilities| +|netns|C program for creating CORE containers| ## Getting started @@ -51,7 +55,10 @@ conveniently run tests, etc. # run core-daemon sudo core-daemon -# run gui +# run python gui +core-pygui + +# run tcl gui core-gui # run mocked unit tests @@ -62,7 +69,7 @@ inv test-mock ## Linux Network Namespace Commands Linux network namespace containers are often managed using the *Linux Container Tools* or *lxc-tools* package. -The lxc-tools website is available here http://lxc.sourceforge.net/ for more information. CORE does not use these +The lxc-tools website is available here http://lxc.sourceforge.net/ for more information. CORE does not use these management utilities, but includes its own set of tools for instantiating and configuring network namespace containers. This section describes these tools. @@ -97,7 +104,7 @@ vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro A script named *core-cleanup* is provided to clean up any running CORE emulations. It will attempt to kill any remaining vnoded processes, kill any EMANE processes, remove the :file:`/tmp/pycore.*` session directories, and remove -any bridges or *nftables* rules. With a *-d* option, it will also kill any running CORE daemon. +any bridges or *ebtables* rules. With a *-d* option, it will also kill any running CORE daemon. 
### netns command @@ -114,5 +121,5 @@ ip link show type bridge # view the netem rules used for applying link effects tc qdisc show # view the rules that make the wireless LAN work -nft list ruleset +ebtables -L ``` diff --git a/docs/diagrams/architecture.plantuml b/docs/diagrams/architecture.plantuml index 403886d9..a43494d5 100644 --- a/docs/diagrams/architecture.plantuml +++ b/docs/diagrams/architecture.plantuml @@ -1,5 +1,5 @@ @startuml -skinparam { +skinparam { RoundCorner 8 ComponentStyle uml2 ComponentBorderColor #Black @@ -9,6 +9,7 @@ skinparam { package User { component "core-gui" as gui #DeepSkyBlue + component "coresendmsg" #DeepSkyBlue component "python scripts" as scripts #DeepSkyBlue component vcmd #DeepSkyBlue } @@ -18,11 +19,11 @@ package Server { package Python { component core #LightSteelBlue } -package "Linux System" { +package "Linux System" { component nodes #SpringGreen [ nodes (linux namespaces) - ] + ] component links #SpringGreen [ links (bridging and traffic manipulation) @@ -30,15 +31,19 @@ package "Linux System" { } package API { + interface TLV as tlv interface gRPC as grpc } -gui <..> grpc +gui <..> tlv +coresendmsg <..> tlv +scripts <..> tlv scripts <..> grpc +tlv -- daemon grpc -- daemon scripts -- core daemon - core core <..> nodes core <..> links vcmd <..> nodes -@enduml +@enduml \ No newline at end of file diff --git a/docs/diagrams/workflow.plantuml b/docs/diagrams/workflow.plantuml index cff943ad..9aa1c04f 100644 --- a/docs/diagrams/workflow.plantuml +++ b/docs/diagrams/workflow.plantuml @@ -1,11 +1,11 @@ @startuml -skinparam { +skinparam { RoundCorner 8 StateBorderColor #Black StateBackgroundColor #LightSteelBlue } -Definition: Session XML +Definition: Session XML/IMN Definition: GUI Drawing Definition: Scripts @@ -37,4 +37,4 @@ Configuration -> Instantiation Instantiation -> Runtime Runtime -> Datacollect Datacollect -> Shutdown -@enduml +@enduml \ No newline at end of file diff --git a/docs/distributed.md b/docs/distributed.md index 95ec7268..ad3d61f8 100644 --- a/docs/distributed.md +++ b/docs/distributed.md @@ -1,5 +1,8 @@ # CORE - Distributed Emulation +* Table of Contents +{:toc} + ## Overview A large emulation scenario can be deployed on multiple emulation servers and @@ -58,7 +61,6 @@ First the distributed servers must be configured to allow passwordless root login over SSH. On distributed server: - ```shelll # install openssh-server sudo apt install openssh-server @@ -79,7 +81,6 @@ sudo systemctl restart sshd ``` On master server: - ```shell # install package if needed sudo apt install openssh-client @@ -98,7 +99,6 @@ connect_kwargs: {"key_filename": "/home/user/.ssh/core"} ``` On distributed server: - ```shell # open sshd config vi /etc/ssh/sshd_config @@ -116,16 +116,15 @@ Make sure the value used below is the absolute path to the file generated above **~/.ssh/core**" Add/update the fabric configuration file **/etc/fabric.yml**: - ```yaml -connect_kwargs: { "key_filename": "/home/user/.ssh/core" } +connect_kwargs: {"key_filename": "/home/user/.ssh/core"} ``` ## Add Emulation Servers in GUI Within the core-gui navigate to menu option: -**Session -> Servers...** +**Session -> Emulation servers...** Within the dialog box presented, add or modify an existing server if present to use the name, address, and port for the a server you plan to use. @@ -133,6 +132,12 @@ to use the name, address, and port for the a server you plan to use. Server configurations are loaded and written to in a configuration file for the GUI. 
+**~/.core/servers.conf** +```conf +# name address port +server2 192.168.0.2 4038 +``` + ## Assigning Nodes The user needs to assign nodes to emulation servers in the scenario. Making no @@ -167,13 +172,11 @@ will draw the link with a dashed line. Wireless nodes, i.e. those connected to a WLAN node, can be assigned to different emulation servers and participate in the same wireless network only if an EMANE model is used for the WLAN. The basic range model does -not work across multiple servers due to the Linux bridging and nftables +not work across multiple servers due to the Linux bridging and ebtables rules that are used. -!!! note - - The basic range wireless model does not support distributed emulation, - but EMANE does. +**NOTE: The basic range wireless model does not support distributed emulation, +but EMANE does.** When nodes are linked across servers **core-daemons** will automatically create necessary tunnels between the nodes when executed. Care should be taken @@ -184,10 +187,10 @@ These tunnels are created using GRE tunneling, similar to the Tunnel Tool. ## Distributed Checklist 1. Install CORE on master server -2. Install distributed CORE package on all servers needed -3. Installed and configure public-key SSH access on all servers (if you want to use - double-click shells or Widgets.) for both the GUI user (for terminals) and root for running CORE commands -4. Update CORE configuration as needed -5. Choose the servers that participate in distributed emulation. -6. Assign nodes to desired servers, empty for master server. -7. Press the **Start** button to launch the distributed emulation. +1. Install distributed CORE package on all servers needed +1. Installed and configure public-key SSH access on all servers (if you want to use +double-click shells or Widgets.) for both the GUI user (for terminals) and root for running CORE commands +1. Update CORE configuration as needed +1. Choose the servers that participate in distributed emulation. +1. Assign nodes to desired servers, empty for master server. +1. Press the **Start** button to launch the distributed emulation. diff --git a/docs/emane.md b/docs/emane.md index a034c63b..f589f834 100644 --- a/docs/emane.md +++ b/docs/emane.md @@ -1,4 +1,7 @@ -# EMANE (Extendable Mobile Ad-hoc Network Emulator) +# CORE/EMANE + +* Table of Contents +{:toc} ## What is EMANE? @@ -28,9 +31,9 @@ and instantiates one EMANE process in the namespace. The EMANE process binds a user space socket to the TAP device for sending and receiving data from CORE. An EMANE instance sends and receives OTA (Over-The-Air) traffic to and from -other EMANE instances via a control port (e.g. *ctrl0*, *ctrl1*). It also +other EMANE instances via a control port (e.g. *ctrl0*, *ctrl1*). It also sends and receives Events to and from the Event Service using the same or a -different control port. EMANE models are configured through the GUI's +different control port. EMANE models are configured through CORE's WLAN configuration dialog. A corresponding EmaneModel Python class is sub-classed for each supported EMANE model, to provide configuration items and their mapping to XML files. This way new models can be easily supported. When @@ -57,17 +60,15 @@ You can find more detailed tutorials and examples at the Every topic below assumes CORE, EMANE, and OSPF MDR have been installed. -!!! 
info +> **WARNING:** demo files will be found within the new `core-pygui` - Demo files will be found within the `core-gui` **~/.coregui/xmls** directory - -| Topic | Model | Description | -|--------------------------------------|---------|-----------------------------------------------------------| -| [XML Files](emane/files.md) | RF Pipe | Overview of generated XML files used to drive EMANE | -| [GPSD](emane/gpsd.md) | RF Pipe | Overview of running and integrating gpsd with EMANE | -| [Precomputed](emane/precomputed.md) | RF Pipe | Overview of using the precomputed propagation model | -| [EEL](emane/eel.md) | RF Pipe | Overview of using the Emulation Event Log (EEL) Generator | -| [Antenna Profiles](emane/antenna.md) | RF Pipe | Overview of using antenna profiles in EMANE | +|Topic|Model|Description| +|---|---|---| +|[XML Files](emane/files.md)|RF Pipe|Overview of generated XML files used to drive EMANE| +|[GPSD](emane/gpsd.md)|RF Pipe|Overview of running and integrating gpsd with EMANE| +|[Precomputed](emane/precomputed.md)|RF Pipe|Overview of using the precomputed propagation model| +|[EEL](emane/eel.md)|RF Pipe|Overview of using the Emulation Event Log (EEL) Generator| +|[Antenna Profiles](emane/antenna.md)|RF Pipe|Overview of using antenna profiles in EMANE| ## EMANE Configuration @@ -79,7 +80,7 @@ EMANE. An example emane section from the **core.conf** file is shown below: emane_platform_port = 8101 emane_transform_port = 8201 emane_event_monitor = False -#emane_models_dir = /home//.coregui/custom_emane +#emane_models_dir = /home/username/.core/myemane # EMANE log level range [0,4] default: 2 emane_log_level = 2 emane_realtime = True @@ -91,10 +92,8 @@ If you have an EMANE event generator (e.g. mobility or pathloss scripts) and want to have CORE subscribe to EMANE location events, set the following line in the **core.conf** configuration file. -!!! note - - Do not set this option to True if you want to manually drag nodes around - on the canvas to update their location in EMANE. +> **NOTE:** Do not set this option to True if you want to manually drag nodes around +on the canvas to update their location in EMANE. ```shell emane_event_monitor = True @@ -105,7 +104,6 @@ prefix will place the DTD files in **/usr/local/share/emane/dtd** while CORE expects them in **/usr/share/emane/dtd**. Update the EMANE prefix configuration to resolve this problem. - ```shell emane_prefix = /usr/local ``` @@ -118,13 +116,11 @@ placed within the path defined by **emane_models_dir** in the CORE configuration file. This path cannot end in **/emane**. Here is an example model with documentation describing functionality: - ```python """ Example custom emane model. 
""" -from pathlib import Path -from typing import Dict, Optional, Set, List +from typing import Dict, List, Optional, Set from core.config import Configuration from core.emane import emanemanifest, emanemodel @@ -166,32 +162,14 @@ class ExampleModel(emanemodel.EmaneModel): mac_defaults: Dict[str, str] = { "pcrcurveuri": "/usr/share/emane/xml/models/mac/rfpipe/rfpipepcr.xml" } - mac_config: List[Configuration] = [] + mac_config: List[Configuration] = emanemanifest.parse(mac_xml, mac_defaults) phy_library: Optional[str] = None phy_xml: str = "/usr/share/emane/manifest/emanephy.xml" phy_defaults: Dict[str, str] = { "subid": "1", "propagationmodel": "2ray", "noisemode": "none" } - phy_config: List[Configuration] = [] + phy_config: List[Configuration] = emanemanifest.parse(phy_xml, phy_defaults) config_ignore: Set[str] = set() - - @classmethod - def load(cls, emane_prefix: Path) -> None: - """ - Called after being loaded within the EmaneManager. Provides configured - emane_prefix for parsing xml files. - - :param emane_prefix: configured emane prefix path - :return: nothing - """ - cls._load_platform_config(emane_prefix) - manifest_path = "share/emane/manifest" - # load mac configuration - mac_xml_path = emane_prefix / manifest_path / cls.mac_xml - cls.mac_config = emanemanifest.parse(mac_xml_path, cls.mac_defaults) - # load phy configuration - phy_xml_path = emane_prefix / manifest_path / cls.phy_xml - cls.phy_config = emanemanifest.parse(phy_xml_path, cls.phy_defaults) ``` ## Single PC with EMANE @@ -205,26 +183,32 @@ EMANE. Using the primary control channel prevents your emulation session from sending multicast traffic on your local network and interfering with other EMANE users. -EMANE is configured through an EMANE node. Once a node is linked to an EMANE -cloud, the radio interface on that node may also be configured +EMANE is configured through a WLAN node, because it is all about emulating +wireless radio networks. Once a node is linked to a WLAN cloud configured +with an EMANE model, the radio interface on that node may also be configured separately (apart from the cloud.) -Right click on an EMANE node and select EMANE Config to open the configuration dialog. -The EMANE models should be listed here for selection. (You may need to restart the +Double-click on a WLAN node to invoke the WLAN configuration dialog. Click +the *EMANE* tab; when EMANE has been properly installed, EMANE wireless modules +should be listed in the *EMANE Models* list. (You may need to restart the CORE daemon if it was running prior to installing the EMANE Python bindings.) +Click on a model name to enable it. -When an EMANE model is selected, you can click on the models option button -causing the GUI to query the CORE daemon for configuration items. -Each model will have different parameters, refer to the +When an EMANE model is selected in the *EMANE Models* list, clicking on the +*model options* button causes the GUI to query the CORE daemon for +configuration items. Each model will have different parameters, refer to the EMANE documentation for an explanation of each item. The defaults values are -presented in the dialog. Clicking *Apply* and *Apply* again will store the +presented in the dialog. Clicking *Apply* and *Apply* again will store the EMANE model selections. +The *EMANE options* button allows specifying some global parameters for +EMANE, some of which are necessary for distributed operation. 
+ The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports geographic location information for determining pathloss between nodes. A default latitude and longitude location is provided by CORE and this location-based pathloss is enabled by default; this is the *pathloss mode* -setting for the Universal PHY. Moving a node on the canvas while the +setting for the Universal PHY. Moving a node on the canvas while the emulation is running generates location events for EMANE. To view or change the geographic location or scale of the canvas use the *Canvas Size and Scale* dialog available from the *Canvas* menu. @@ -239,14 +223,23 @@ used to achieve geo-location accuracy in this situation. Clicking the green *Start* button launches the emulation and causes TAP devices to be created in the virtual nodes that are linked to the EMANE WLAN. These devices appear with interface names such as eth0, eth1, etc. The EMANE processes -should now be running in each namespace. +should now be running in each namespace. For a four node scenario: -To view the configuration generated by CORE, look in the */tmp/pycore.nnnnn/* session -directory to find the generated EMANE xml files. One easy way to view -this information is by double-clicking one of the virtual nodes and listing the files -in the shell. +```shell +ps -aef | grep emane +root 1063 969 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane4.log /tmp/pycore.59992/platform4.xml +root 1117 959 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane2.log /tmp/pycore.59992/platform2.xml +root 1179 942 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane1.log /tmp/pycore.59992/platform1.xml +root 1239 979 0 11:46 ? 00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane5.log /tmp/pycore.59992/platform5.xml +``` -![](static/emane-single-pc.png) +The example above shows the EMANE processes started by CORE. To view the +configuration generated by CORE, look in the */tmp/pycore.nnnnn/* session +directory for a *platform.xml* file and other XML files. One easy way to view +this information is by double-clicking one of the virtual nodes, and typing +*cd ..* in the shell to go up to the session directory. + +![](static/single-pc-emane.png) ## Distributed EMANE @@ -267,41 +260,55 @@ within a node. **IMPORTANT: If an auxiliary control network is used, an interface on the host has to be assigned to that network.** -Each machine that will act as an emulation server needs to have CORE distributed -and EMANE installed. As well as be setup to work for CORE distributed mode. +Each machine that will act as an emulation server needs to have CORE and EMANE +installed. -The IP addresses of the available servers are configured from the CORE -servers dialog box. The dialog shows available +The IP addresses of the available servers are configured from the CORE emulation +servers dialog box (choose *Session* then *Emulation servers...*). This list of +servers is stored in a *~/.core/servers.conf* file. The dialog shows available servers, some or all of which may be assigned to nodes on the canvas. -Nodes need to be assigned to servers and can be done so using the node -configuration dialog. When a node is not assigned to any emulation server, -it will be emulated locally. +Nodes need to be assigned to emulation servers. Select several nodes, +right-click them, and choose *Assign to* and the name of the desired server. +When a node is not assigned to any emulation server, it will be emulated +locally. 
The local machine that the GUI connects with is considered the +"master" machine, which in turn connects to the other emulation server +"slaves". Public key SSH should be configured from the master to the slaves. -Using the EMANE node configuration dialog. You can change the EMANE model -being used, along with changing any configuration setting from their defaults. +Under the *EMANE* tab of the EMANE WLAN, click on the *EMANE options* button. +This brings up the emane configuration dialog. The *enable OTA Manager channel* +should be set to *on*. The *OTA Manager device* and *Event Service device* +should be set to a control network device. For example, if you have a primary +and auxiliary control network (i.e. controlnet and controlnet1), and you want +the OTA traffic to have its dedicated network, set the OTA Manager device to +*ctrl1* and the Event Service device to *ctrl0*. The EMANE models can be +configured. Click *Apply* to save these settings. -![](static/emane-configuration.png) +![](static/distributed-emane-configuration.png) -!!! note +> **NOTE:** Here is a quick checklist for distributed emulation with EMANE. - Here is a quick checklist for distributed emulation with EMANE. + 1. Follow the steps outlined for normal CORE. + 2. Under the *EMANE* tab of the EMANE WLAN, click on *EMANE options*. + 3. Turn on the *OTA Manager channel* and set the *OTA Manager device*. + Also set the *Event Service device*. + 4. Select groups of nodes, right-click them, and assign them to servers + using the *Assign to* menu. + 5. Synchronize your machine's clocks prior to starting the emulation, + using *ntp* or *ptp*. Some EMANE models are sensitive to timing. + 6. Press the *Start* button to launch the distributed emulation. -1. Follow the steps outlined for normal CORE. -2. Assign nodes to desired servers -3. Synchronize your machine's clocks prior to starting the emulation, - using *ntp* or *ptp*. Some EMANE models are sensitive to timing. -4. Press the *Start* button to launch the distributed emulation. Now when the Start button is used to instantiate the emulation, the local CORE -daemon will connect to other emulation servers that have been assigned +Python daemon will connect to other emulation servers that have been assigned to nodes. Each server will have its own session directory where the *platform.xml* file and other EMANE XML files are generated. The NEM IDs are -automatically coordinated across servers so there is no overlap. +automatically coordinated across servers so there is no overlap. Each server +also gets its own Platform ID. An Ethernet device is used for disseminating multicast EMANE events, as specified in the *configure emane* dialog. EMANE's Event Service can be run -with mobility or pathloss scripts. +with mobility or pathloss scripts as described in :ref:`Single_PC_with_EMANE`. If CORE is not subscribed to location events, it will generate them as nodes are moved on the canvas. @@ -309,3 +316,5 @@ Double-clicking on a node during runtime will cause the GUI to attempt to SSH to the emulation server for that node and run an interactive shell. The public key SSH configuration should be tested with all emulation servers prior to starting the emulation. 
+ +![](static/distributed-emane-network.png) diff --git a/docs/emane/antenna.md b/docs/emane/antenna.md index 79c023ac..20c98304 100644 --- a/docs/emane/antenna.md +++ b/docs/emane/antenna.md @@ -1,7 +1,8 @@ # EMANE Antenna Profiles +* Table of Contents +{:toc} ## Overview - Introduction to using the EMANE antenna profile in CORE, based on the example EMANE Demo linked below. @@ -9,348 +10,340 @@ EMANE Demo linked below. for more specifics. ## Demo Setup - We will need to create some files in advance of starting this session. Create directory to place antenna profile files. - ```shell mkdir /tmp/emane ``` Create `/tmp/emane/antennaprofile.xml` with the following contents. - ```xml - - - - - - + + + + + + ``` Create `/tmp/emane/antenna30dsector.xml` with the following contents. - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ``` Create `/tmp/emane/blockageaft.xml` with the following contents. - ```xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ``` ## Run Demo - 1. Select `Open...` within the GUI 1. Load `emane-demo-antenna.xml` -1. Click ![Start Button](../static/gui/start.png) +1. Click ![Start Button](../static/gui/start.gif) 1. After startup completes, double click n1 to bring up the nodes terminal ## Example Demo - This demo will cover running an EMANE event service to feed in antenna, location, and pathloss events to demonstrate how antenna profiles can be used. ### EMANE Event Dump - On n1 lets dump EMANE events, so when we later run the EMANE event service you can monitor when and what is sent. @@ -359,44 +352,38 @@ root@n1:/tmp/pycore.44917/n1.conf# emaneevent-dump -i ctrl0 ``` ### Send EMANE Events - On the host machine create the following to send EMANE events. -!!! warning - - Make sure to set the `eventservicedevice` to the proper control - network value +> **WARNING:** make sure to set the `eventservicedevice` to the proper control +> network value Create `eventservice.xml` with the following contents. - ```xml - - - + + + ``` Create `eelgenerator.xml` with the following contents. - ```xml - + - - - - + + + + ``` Create `scenario.eel` with the following contents. - ```shell 0.0 nem:1 antennaprofile 1,0.0,0.0 0.0 nem:4 antennaprofile 2,0.0,0.0 @@ -426,25 +413,23 @@ Create `scenario.eel` with the following contents. Run the EMANE event service, monitor what is output on n1 for events dumped and see the link changes within the CORE GUI. - ```shell emaneeventservice -l 3 eventservice.xml ``` ### Stages - The events sent will trigger 4 different states. 
* State 1 - * n2 and n3 see each other - * n4 and n3 are pointing away + * n2 and n3 see each other + * n4 and n3 are pointing away * State 2 - * n2 and n3 see each other - * n1 and n2 see each other - * n4 and n3 see each other + * n2 and n3 see each other + * n1 and n2 see each other + * n4 and n3 see each other * State 3 - * n2 and n3 see each other - * n4 and n3 are pointing at each other but blocked + * n2 and n3 see each other + * n4 and n3 are pointing at each other but blocked * State 4 - * n2 and n3 see each other - * n4 and n3 see each other + * n2 and n3 see each other + * n4 and n3 see each other diff --git a/docs/emane/eel.md b/docs/emane/eel.md index c2dad86a..0f41c357 100644 --- a/docs/emane/eel.md +++ b/docs/emane/eel.md @@ -1,51 +1,44 @@ # EMANE Emulation Event Log (EEL) Generator +* Table of Contents +{:toc} ## Overview - Introduction to using the EMANE event service and eel files to provide events. [EMANE Demo 1](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-1) for more specifics. ## Run Demo - 1. Select `Open...` within the GUI -2. Load `emane-demo-eel.xml` -3. Click ![Start Button](../static/gui/start.png) -4. After startup completes, double click n1 to bring up the nodes terminal +1. Load `emane-demo-eel.xml` +1. Click ![Start Button](../static/gui/start.gif) +1. After startup completes, double click n1 to bring up the nodes terminal ## Example Demo - This demo will go over defining an EMANE event service and eel file to drive an emane event service. ### Viewing Events - On n1 we will use the EMANE event dump utility to listen to events. - ```shell root@n1:/tmp/pycore.46777/n1.conf# emaneevent-dump -i ctrl0 ``` ### Sending Events - On the host machine we will create the following files and start the EMANE event service targeting the control network. -!!! warning - - Make sure to set the `eventservicedevice` to the proper control - network value +> **WARNING:** make sure to set the `eventservicedevice` to the proper control +> network value Create `eventservice.xml` with the following contents. - ```xml - - - + + + ``` @@ -64,23 +57,21 @@ These configuration items tell the EEL Generator which sentences to map to which plugin and whether to issue delta or full updates. Create `eelgenerator.xml` with the following contents. - ```xml - + - - - - + + + + ``` Finally, create `scenario.eel` with the following contents. - ```shell 0.0 nem:1 pathloss nem:2,90.0 0.0 nem:2 pathloss nem:1,90.0 @@ -89,13 +80,11 @@ Finally, create `scenario.eel` with the following contents. ``` Start the EMANE event service using the files created above. - ```shell emaneeventservice eventservice.xml -l 3 ``` ### Sent Events - If we go back to look at our original terminal we will see the events logged out to the terminal. diff --git a/docs/emane/files.md b/docs/emane/files.md index c04b0f6b..62729ac8 100644 --- a/docs/emane/files.md +++ b/docs/emane/files.md @@ -1,7 +1,8 @@ # EMANE XML Files +* Table of Contents +{:toc} ## Overview - Introduction to the XML files generated by CORE used to drive EMANE for a given node. @@ -9,30 +10,27 @@ a given node. may provide more helpful details. ## Run Demo - 1. Select `Open...` within the GUI -2. Load `emane-demo-files.xml` -3. Click ![Start Button](../static/gui/start.png) -4. After startup completes, double click n1 to bring up the nodes terminal +1. Load `emane-demo-files.xml` +1. Click ![Start Button](../static/gui/start.gif) +1. 
After startup completes, double click n1 to bring up the nodes terminal ## Example Demo - We will take a look at the files generated in the example demo provided. In this case we are running the RF Pipe model. ### Generated Files -| Name | Description | -|-------------------------------------|------------------------------------------------------| -| \-platform.xml | configuration file for the emulator instances | -| \-nem.xml | configuration for creating a NEM | -| \-mac.xml | configuration for defining a NEMs MAC layer | -| \-phy.xml | configuration for defining a NEMs PHY layer | -| \-trans-virtual.xml | configuration when a virtual transport is being used | -| \-trans.xml | configuration when a raw transport is being used | +|Name|Description| +|---|---| +|\-platform.xml|configuration file for the emulator instances| +|\-nem.xml|configuration for creating a NEM| +|\-mac.xml|configuration for defining a NEMs MAC layer| +|\-phy.xml|configuration for defining a NEMs PHY layer| +|\-trans-virtual.xml|configuration when a virtual transport is being used| +|\-trans.xml|configuration when a raw transport is being used| ### Listing File - Below are the files within n1 after starting the demo session. ```shell @@ -43,7 +41,6 @@ eth0-phy.xml n1-emane.log usr.local.etc.quagga var.run.quagga ``` ### Platform XML - The root configuration file used to run EMANE for a node is the platform xml file. In this demo we are looking at `n1-platform.xml`. @@ -81,7 +78,6 @@ root@n1:/tmp/pycore.46777/n1.conf# cat n1-platform.xml ``` ### NEM XML - The nem definition will contain reference to the transport, mac, and phy xml definitions being used for a given nem. @@ -97,7 +93,6 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-nem.xml ``` ### MAC XML - MAC layer configuration settings would be found in this file. CORE will write out all values, even if the value is a default value. @@ -120,7 +115,6 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-mac.xml ``` ### PHY XML - PHY layer configuration settings would be found in this file. CORE will write out all values, even if the value is a default value. @@ -155,7 +149,6 @@ root@n1:/tmp/pycore.46777/n1.conf# cat eth0-phy.xml ``` ### Transport XML - ```shell root@n1:/tmp/pycore.46777/n1.conf# cat eth0-trans-virtual.xml diff --git a/docs/emane/gpsd.md b/docs/emane/gpsd.md index eadf8af2..06c44198 100644 --- a/docs/emane/gpsd.md +++ b/docs/emane/gpsd.md @@ -1,62 +1,54 @@ # EMANE GPSD Integration +* Table of Contents +{:toc} ## Overview - Introduction to integrating gpsd in CORE with EMANE. [EMANE Demo 0](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-0) may provide more helpful details. -!!! warning - - Requires installation of [gpsd](https://gpsd.gitlab.io/gpsd/index.html) +> **WARNING:** requires installation of [gpsd](https://gpsd.gitlab.io/gpsd/index.html) ## Run Demo - 1. Select `Open...` within the GUI -2. Load `emane-demo-gpsd.xml` -3. Click ![Start Button](../static/gui/start.png) -4. After startup completes, double click n1 to bring up the nodes terminal +1. Load `emane-demo-gpsd.xml` +1. Click ![Start Button](../static/gui/start.gif) +1. After startup completes, double click n1 to bring up the nodes terminal ## Example Demo - This section will cover how to run a gpsd location agent within EMANE, that will write out locations to a pseudo terminal file. That file can be read in by the gpsd server and make EMANE location events available to gpsd clients. 
### EMANE GPSD Event Daemon - First create an `eventdaemon.xml` file on n1 with the following contents. - ```xml - - - + + + ``` Then create the `gpsdlocationagent.xml` file on n1 with the following contents. - ```xml - + ``` Start the EMANE event agent. This will facilitate feeding location events out to a pseudo terminal file defined above. - ```shell emaneeventd eventdaemon.xml -r -d -l 3 -f emaneeventd.log ``` Start gpsd, reading in the pseudo terminal file. - ```shell gpsd -G -n -b $(cat gps.pty) ``` @@ -67,41 +59,36 @@ EEL Events will be played out from the actual host machine over the designated control network interface. Create the following files in the same directory somewhere on your host. -!!! note - - Make sure the below eventservicedevice matches the control network - device being used on the host for EMANE +> **NOTE:** make sure the below eventservicedevice matches the control network +> device being used on the host for EMANE Create `eventservice.xml` on the host machine with the following contents. - ```xml - - - + + + ``` Create `eelgenerator.xml` on the host machine with the following contents. - ```xml - + - - - - + + + + ``` Create `scenario.eel` file with the following contents. - ```shell 0.0 nem:1 location gps 40.031075,-74.523518,3.000000 0.0 nem:2 location gps 40.031165,-74.523412,3.000000 @@ -109,8 +96,7 @@ Create `scenario.eel` file with the following contents. Start the EEL event service, which will send the events defined in the file above over the control network to all EMANE nodes. These location events will be received -and provided to gpsd. This allows gpsd client to connect to and get gps locations. - +and provided to gpsd. This allow gpsd client to connect to and get gps locations. ```shell emaneeventservice eventservice.xml -l 3 ``` diff --git a/docs/emane/precomputed.md b/docs/emane/precomputed.md index 4d0234ae..f8064c97 100644 --- a/docs/emane/precomputed.md +++ b/docs/emane/precomputed.md @@ -1,40 +1,35 @@ # EMANE Procomputed +* Table of Contents +{:toc} ## Overview - Introduction to using the precomputed propagation model. [EMANE Demo 1](https://github.com/adjacentlink/emane-tutorial/wiki/Demonstration-1) for more specifics. ## Run Demo - 1. Select `Open...` within the GUI -2. Load `emane-demo-precomputed.xml` -3. Click ![Start Button](../static/gui/start.png) -4. After startup completes, double click n1 to bring up the nodes terminal +1. Load `emane-demo-precomputed.xml` +1. Click ![Start Button](../static/gui/start.gif) +1. After startup completes, double click n1 to bring up the nodes terminal ## Example Demo - -This demo is using the RF Pipe model with the propagation model set to +This demo is uing the RF Pipe model witht he propagation model set to precomputed. ### Failed Pings - Due to using precomputed and having not sent any pathloss events, the nodes -cannot ping each other yet. +cannot ping eachother yet. Open a terminal on n1. - ```shell root@n1:/tmp/pycore.46777/n1.conf# ping 10.0.0.2 connect: Network is unreachable ``` ### EMANE Shell - You can leverage `emanesh` to investigate why packets are being dropped. - ```shell root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy BroadcastPacketDropTable0 UnicastPacketDropTable0 nem 1 phy BroadcastPacketDropTable0 @@ -48,7 +43,6 @@ nem 1 phy UnicastPacketDropTable0 In the example above we can see that the reason packets are being dropped is due to the propogation model and that is because we have not issued any pathloss events. 
You can run another command to validate if you have received any pathloss events. - ```shell root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy PathlossEventInfoTable nem 1 phy PathlossEventInfoTable @@ -56,19 +50,15 @@ nem 1 phy PathlossEventInfoTable ``` ### Pathloss Events - On the host we will send pathloss events from all nems to all other nems. -!!! note - - Make sure properly specify the right control network device +> **NOTE:** make sure properly specify the right control network device ```shell emaneevent-pathloss 1:2 90 -i ``` Now if we check for pathloss events on n2 we will see what was just sent above. - ```shell root@n1:/tmp/pycore.46777/n1.conf# emanesh localhost get table nems phy PathlossEventInfoTable nem 1 phy PathlossEventInfoTable @@ -77,7 +67,6 @@ nem 1 phy PathlossEventInfoTable ``` You should also now be able to ping n1 from n2. - ```shell root@n1:/tmp/pycore.46777/n1.conf# ping -c 3 10.0.0.2 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. diff --git a/docs/grpc.md b/docs/grpc.md index 3266a57d..998970c5 100644 --- a/docs/grpc.md +++ b/docs/grpc.md @@ -1,6 +1,7 @@ -* Table of Contents +# gRPC API -## Overview +* Table of Contents +{:toc} [gRPC](https://grpc.io/) is a client/server API for interfacing with CORE and used by the python GUI for driving all functionality. It is dependent @@ -8,7 +9,7 @@ on having a running `core-daemon` instance to be leveraged. A python client can be created from the raw generated grpc files included with CORE or one can leverage a provided gRPC client that helps encapsulate -some functionality to try and help make things easier. +some of the functionality to try and help make things easier. ## Python Client @@ -18,7 +19,7 @@ to help provide some conveniences when using the API. ### Client HTTP Proxy -Since gRPC is HTTP2 based, proxy configurations can cause issues. By default, +Since gRPC is HTTP2 based, proxy configurations can cause issues. By default the client disables proxy support to avoid issues when a proxy is present. You can enable and properly account for this issue when needed. @@ -40,29 +41,27 @@ When creating nodes of type `NodeType.DEFAULT` these are the default models and the services they map to. * mdr - * zebra, OSPFv3MDR, IPForward + * zebra, OSPFv3MDR, IPForward * PC - * DefaultRoute + * DefaultRoute * router - * zebra, OSPFv2, OSPFv3, IPForward + * zebra, OSPFv2, OSPFv3, IPForward * host - * DefaultRoute, SSH + * DefaultRoute, SSH ### Interface Helper There is an interface helper class that can be leveraged for convenience when creating interface data for nodes. Alternatively one can manually create -a `core.api.grpc.wrappers.Interface` class instead with appropriate information. - -Manually creating gRPC client interface: +a `core.api.grpc.core_pb2.Interface` class instead with appropriate information. +Manually creating gRPC interface data: ```python -from core.api.grpc.wrappers import Interface - +from core.api.grpc import core_pb2 # id is optional and will set to the next available id # name is optional and will default to eth # mac is optional and will result in a randomly generated mac -iface = Interface( +iface_data = core_pb2.Interface( id=0, name="eth0", ip4="10.0.0.1", @@ -73,7 +72,6 @@ iface = Interface( ``` Leveraging the interface helper class: - ```python from core.api.grpc import client @@ -92,7 +90,6 @@ iface_data = iface_helper.create_iface( Various events that can occur within a session can be listened to. 
Event types: - * session - events for changes in session state and mobility start/stop/pause * node - events for node movements and icon changes * link - events for link configuration changes and wireless link add/delete @@ -101,26 +98,16 @@ Event types: * file - file events when the legacy gui joins a session ```python -from core.api.grpc import client -from core.api.grpc.wrappers import EventType - +from core.api.grpc import core_pb2 def event_listener(event): print(event) - -# create grpc client and connect -core = client.CoreGrpcClient() -core.connect() - -# add session -session = core.create_session() - # provide no events to listen to all events -core.events(session.id, event_listener) +core.events(session_id, event_listener) # provide events to listen to specific events -core.events(session.id, event_listener, [EventType.NODE]) +core.events(session_id, event_listener, [core_pb2.EventType.NODE]) ``` ### Configuring Links @@ -128,7 +115,6 @@ core.events(session.id, event_listener, [EventType.NODE]) Links can be configured at the time of creation or during runtime. Currently supported configuration options: - * bandwidth (bps) * delay (us) * duplicate (%) @@ -136,48 +122,27 @@ Currently supported configuration options: * loss (%) ```python -from core.api.grpc import client -from core.api.grpc.wrappers import LinkOptions, Position - -# interface helper -iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") - -# create grpc client and connect -core = client.CoreGrpcClient() -core.connect() - -# add session -session = core.create_session() - -# create nodes -position = Position(x=100, y=100) -node1 = session.add_node(1, position=position) -position = Position(x=300, y=100) -node2 = session.add_node(2, position=position) +from core.api.grpc import core_pb2 # configuring when creating a link -options = LinkOptions( +options = core_pb2.LinkOptions( bandwidth=54_000_000, delay=5000, dup=5, loss=5.5, jitter=0, ) -iface1 = iface_helper.create_iface(node1.id, 0) -iface2 = iface_helper.create_iface(node2.id, 0) -link = session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2) +core.add_link(session_id, n1_id, n2_id, iface1_data, iface2_data, options) # configuring during runtime -link.options.loss = 10.0 -core.edit_link(session.id, link) +core.edit_link(session_id, n1_id, n2_id, iface1_id, iface2_id, options) ``` ### Peer to Peer Example - ```python # required imports from core.api.grpc import client -from core.api.grpc.wrappers import Position +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState # interface helper iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") @@ -186,30 +151,39 @@ iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001 core = client.CoreGrpcClient() core.connect() -# add session -session = core.create_session() +# create session and get id +response = core.create_session() +session_id = response.session_id -# create nodes +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create node one position = Position(x=100, y=100) -node1 = session.add_node(1, position=position) +n1 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two position = Position(x=300, y=100) -node2 = session.add_node(2, position=position) +n2 = Node(type=NodeType.DEFAULT, 
position=position, model="PC") +response = core.add_node(session_id, n2) +n2_id = response.node_id -# create link -iface1 = iface_helper.create_iface(node1.id, 0) -iface2 = iface_helper.create_iface(node2.id, 0) -session.add_link(node1=node1, node2=node2, iface1=iface1, iface2=iface2) +# links nodes together +iface1 = iface_helper.create_iface(n1_id, 0) +iface2 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n1_id, n2_id, iface1, iface2) -# start session -core.start_session(session) +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) ``` ### Switch/Hub Example - ```python # required imports from core.api.grpc import client -from core.api.grpc.wrappers import NodeType, Position +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState # interface helper iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") @@ -218,33 +192,46 @@ iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001 core = client.CoreGrpcClient() core.connect() -# add session -session = core.create_session() +# create session and get id +response = core.create_session() +session_id = response.session_id -# create nodes +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create switch node position = Position(x=200, y=200) -switch = session.add_node(1, _type=NodeType.SWITCH, position=position) +switch = Node(type=NodeType.SWITCH, position=position) +response = core.add_node(session_id, switch) +switch_id = response.node_id + +# create node one position = Position(x=100, y=100) -node1 = session.add_node(2, position=position) +n1 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two position = Position(x=300, y=100) -node2 = session.add_node(3, position=position) +n2 = Node(type=NodeType.DEFAULT, position=position, model="PC") +response = core.add_node(session_id, n2) +n2_id = response.node_id -# create links -iface1 = iface_helper.create_iface(node1.id, 0) -session.add_link(node1=node1, node2=switch, iface1=iface1) -iface1 = iface_helper.create_iface(node2.id, 0) -session.add_link(node1=node2, node2=switch, iface1=iface1) +# links nodes to switch +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, switch_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, switch_id, iface1) -# start session -core.start_session(session) +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) ``` ### WLAN Example - ```python # required imports from core.api.grpc import client -from core.api.grpc.wrappers import NodeType, Position +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState # interface helper iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") @@ -253,37 +240,49 @@ iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001 core = client.CoreGrpcClient() core.connect() -# add session -session = core.create_session() +# create session and get id +response = core.create_session() +session_id = response.session_id -# create nodes +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create wlan node position = Position(x=200, y=200) -wlan = 
session.add_node(1, _type=NodeType.WIRELESS_LAN, position=position) +wlan = Node(type=NodeType.WIRELESS_LAN, position=position) +response = core.add_node(session_id, wlan) +wlan_id = response.node_id + +# create node one position = Position(x=100, y=100) -node1 = session.add_node(2, model="mdr", position=position) +n1 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two position = Position(x=300, y=100) -node2 = session.add_node(3, model="mdr", position=position) +n2 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n2) +n2_id = response.node_id -# create links -iface1 = iface_helper.create_iface(node1.id, 0) -session.add_link(node1=node1, node2=wlan, iface1=iface1) -iface1 = iface_helper.create_iface(node2.id, 0) -session.add_link(node1=node2, node2=wlan, iface1=iface1) - -# set wlan config using a dict mapping currently +# configure wlan using a dict mapping currently # support values as strings -wlan.set_wlan( - { - "range": "280", - "bandwidth": "55000000", - "delay": "6000", - "jitter": "5", - "error": "5", - } -) +core.set_wlan_config(session_id, wlan_id, { + "range": "280", + "bandwidth": "55000000", + "delay": "6000", + "jitter": "5", + "error": "5", +}) -# start session -core.start_session(session) +# links nodes to wlan +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, wlan_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, wlan_id, iface1) + +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) ``` ### EMANE Example @@ -292,7 +291,6 @@ For EMANE you can import and use one of the existing models and use its name for configuration. Current models: - * core.emane.ieee80211abg.EmaneIeee80211abgModel * core.emane.rfpipe.EmaneRfPipeModel * core.emane.tdma.EmaneTdmaModel @@ -309,8 +307,8 @@ will use the defaults. When no configuration is used, the defaults are used. 
```python # required imports from core.api.grpc import client -from core.api.grpc.wrappers import NodeType, Position -from core.emane.models.ieee80211abg import EmaneIeee80211abgModel +from core.api.grpc.core_pb2 import Node, NodeType, Position, SessionState +from core.emane.ieee80211abg import EmaneIeee80211abgModel # interface helper iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001::/64") @@ -319,47 +317,68 @@ iface_helper = client.InterfaceHelper(ip4_prefix="10.0.0.0/24", ip6_prefix="2001 core = client.CoreGrpcClient() core.connect() -# add session -session = core.create_session() +# create session and get id +response = core.create_session() +session_id = response.session_id -# create nodes +# change session state to configuration so that nodes get started when added +core.set_session_state(session_id, SessionState.CONFIGURATION) + +# create emane node position = Position(x=200, y=200) -emane = session.add_node( - 1, _type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name -) +emane = Node(type=NodeType.EMANE, position=position, emane=EmaneIeee80211abgModel.name) +response = core.add_node(session_id, emane) +emane_id = response.node_id + +# create node one position = Position(x=100, y=100) -node1 = session.add_node(2, model="mdr", position=position) +n1 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n1) +n1_id = response.node_id + +# create node two position = Position(x=300, y=100) -node2 = session.add_node(3, model="mdr", position=position) +n2 = Node(type=NodeType.DEFAULT, position=position, model="mdr") +response = core.add_node(session_id, n2) +n2_id = response.node_id -# create links -iface1 = iface_helper.create_iface(node1.id, 0) -session.add_link(node1=node1, node2=emane, iface1=iface1) -iface1 = iface_helper.create_iface(node2.id, 0) -session.add_link(node1=node2, node2=emane, iface1=iface1) +# configure general emane settings +core.set_emane_config(session_id, { + "eventservicettl": "2" +}) -# setting emane specific emane model configuration -emane.set_emane_model(EmaneIeee80211abgModel.name, { - "eventservicettl": "2", +# configure emane model settings +# using a dict mapping currently support values as strings +core.set_emane_model_config(session_id, emane_id, EmaneIeee80211abgModel.name, { "unicastrate": "3", }) -# start session -core.start_session(session) +# links nodes to emane +iface1 = iface_helper.create_iface(n1_id, 0) +core.add_link(session_id, n1_id, emane_id, iface1) +iface1 = iface_helper.create_iface(n2_id, 0) +core.add_link(session_id, n2_id, emane_id, iface1) + +# change session state +core.set_session_state(session_id, SessionState.INSTANTIATION) ``` EMANE Model Configuration: - ```python -# emane network specific config, set on an emane node -# this setting applies to all nodes connected -emane.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"}) +# emane network specific config +core.set_emane_model_config(session_id, emane_id, EmaneIeee80211abgModel.name, { + "unicastrate": "3", +}) -# node specific config for an individual node connected to an emane network -node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"}) +# node specific config +core.set_emane_model_config(session_id, node_id, EmaneIeee80211abgModel.name, { + "unicastrate": "3", +}) -# node interface specific config for an individual node connected to an emane network -node.set_emane_model(EmaneIeee80211abgModel.name, {"unicastrate": "3"}, iface_id=0) +# node 
interface specific config +core.set_emane_model_config(session_id, node_id, EmaneIeee80211abgModel.name, { + "unicastrate": "3", +}, iface_id) ``` ## Configuring a Service @@ -370,7 +389,6 @@ Configuring the files of a service results in a specific hard coded script being generated, instead of the default scripts, that may leverage dynamic generation. The following features can be configured for a service: - * files - files that will be generated * directories - directories that will be mounted unique to the node * startup - commands to run start a service @@ -378,11 +396,13 @@ The following features can be configured for a service: * shutdown - commands to run to stop a service Editing service properties: - ```python # configure a service, for a node, for a given session -node.service_configs[service_name] = NodeServiceData( - configs=["file1.sh", "file2.sh"], +core.set_node_service( + session_id, + node_id, + service_name, + files=["file1.sh", "file2.sh"], directories=["/etc/node"], startup=["bash file1.sh"], validate=[], @@ -394,18 +414,22 @@ When editing a service file, it must be the name of `config` file that the service will generate. Editing a service file: - ```python # to edit the contents of a generated file you can specify # the service, the file name, and its contents -file_configs = node.service_file_configs.setdefault(service_name, {}) -file_configs[file_name] = "echo hello world" +core.set_node_service_file( + session_id, + node_id, + service_name, + file_name, + "echo hello", +) ``` ## File Examples File versions of the network examples can be found -[here](https://github.com/coreemu/core/tree/master/package/examples/grpc). +[here](https://github.com/coreemu/core/tree/master/daemon/examples/grpc). These examples will create a session using the gRPC API when the core-daemon is running. You can then switch to and attach to these sessions using either of the CORE GUIs. diff --git a/docs/gui.md b/docs/gui.md index c296ac18..85bbb6cd 100644 --- a/docs/gui.md +++ b/docs/gui.md @@ -1,6 +1,12 @@ -# CORE GUI -![](static/core-gui.png) +# Using the CORE GUI + +* Table of Contents +{:toc} + +The following image shows the CORE GUI: +![](static/core_screenshot.png) + ## Overview @@ -8,7 +14,7 @@ The GUI is used to draw nodes and network devices on a canvas, linking them together to create an emulated network session. After pressing the start button, CORE will proceed through these phases, -staying in the **runtime** phase. After the session is stopped, CORE will +staying in the **runtime** phase. After the session is stopped, CORE will proceed to the **data collection** phase before tearing down the emulated state. @@ -18,58 +24,49 @@ when these session states are reached. ## Prerequisites -Beyond installing CORE, you must have the CORE daemon running. This is done +Beyond installing CORE, you must have the CORE daemon running. This is done on the command line with either systemd or sysv. ```shell -# systemd service +# systemd sudo systemctl daemon-reload sudo systemctl start core-daemon +# sysv +sudo service core-daemon start +``` + +You can also invoke the daemon directly from the command line, which can be +useful if you'd like to see the logging output directly. + +```shell # direct invocation sudo core-daemon ``` -## GUI Files - -The GUI will create a directory in your home directory on first run called -~/.coregui. This directory will help layout various files that the GUI may use. 
- -* .coregui/ - * backgrounds/ - * place backgrounds used for display in the GUI - * custom_emane/ - * place to keep custom emane models to use with the core-daemon - * custom_services/ - * place to keep custom services to use with the core-daemon - * icons/ - * icons the GUI uses along with customs icons desired - * mobility/ - * place to keep custom mobility files - * scripts/ - * place to keep core related scripts - * xmls/ - * place to keep saved session xml files - * gui.log - * log file when running the gui, look here when issues occur for exceptions etc - * config.yaml - * configuration file used to save/load various gui related settings (custom nodes, layouts, addresses, etc) - ## Modes of Operation The CORE GUI has two primary modes of operation, **Edit** and **Execute** modes. Running the GUI, by typing **core-gui** with no options, starts in -Edit mode. Nodes are drawn on a blank canvas using the toolbar on the left +Edit mode. Nodes are drawn on a blank canvas using the toolbar on the left and configured from right-click menus or by double-clicking them. The GUI does not need to be run as root. -Once editing is complete, pressing the green **Start** button instantiates -the topology and enters Execute mode. In execute mode, -the user can interact with the running emulated machines by double-clicking or -right-clicking on them. The editing toolbar disappears and is replaced by an -execute toolbar, which provides tools while running the emulation. Pressing -the red **Stop** button will destroy the running emulation and return CORE -to Edit mode. +Once editing is complete, pressing the green **Start** button (or choosing +**Execute** from the **Session** menu) instantiates the topology within the +Linux kernel and enters Execute mode. In execute mode, the user can interact +with the running emulated machines by double-clicking or right-clicking on +them. The editing toolbar disappears and is replaced by an execute toolbar, +which provides tools while running the emulation. Pressing the red **Stop** +button (or choosing **Terminate** from the **Session** menu) will destroy +the running emulation and return CORE to Edit mode. + +CORE can be started directly in Execute mode by specifying **--start** and a +topology file on the command line: + +```shell +core-gui --start ~/.core/configs/myfile.imn +``` Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be terminated. The emulation may be left @@ -77,22 +74,11 @@ running and the GUI can reconnect to an existing session at a later time. The GUI can be run as a normal user on Linux. -The GUI currently provides the following options on startup. +The GUI can be connected to a different address or TCP port using the +**--address** and/or **--port** options. The defaults are shown below. ```shell -usage: core-gui [-h] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-p] - [-s SESSION] [--create-dir] - -CORE Python GUI - -optional arguments: - -h, --help show this help message and exit - -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --level {DEBUG,INFO,WARNING,ERROR,CRITICAL} - logging level - -p, --proxy enable proxy - -s SESSION, --session SESSION - session id to join - --create-dir create gui directory and exit +core-gui --address 127.0.0.1 --port 4038 ``` ## Toolbar @@ -107,45 +93,46 @@ the left side of the CORE window. Below are brief descriptions for each toolbar item, starting from the top. 
Most of the tools are grouped into related sub-menus, which appear when you click on their group icon. -| Icon | Name | Description | -|----------------------------|----------------|----------------------------------------------------------------------------------------| -| ![](static/gui/select.png) | Selection Tool | Tool for selecting, moving, configuring nodes. | -| ![](static/gui/start.png) | Start Button | Starts Execute mode, instantiates the emulation. | -| ![](static/gui/link.png) | Link | Allows network links to be drawn between two nodes by clicking and dragging the mouse. | +| Icon | Name | Description | +|---|---|---| +| ![](static/gui/select.gif) | Selection Tool | Tool for selecting, moving, configuring nodes. | +| ![](static/gui/start.gif) | Start Button | Starts Execute mode, instantiates the emulation. | +| ![](static/gui/link.gif) | Link | Allows network links to be drawn between two nodes by clicking and dragging the mouse. | ### CORE Nodes These nodes will create a new node container and run associated services. -| Icon | Name | Description | -|----------------------------|---------|------------------------------------------------------------------------------| -| ![](static/gui/router.png) | Router | Runs Quagga OSPFv2 and OSPFv3 routing to forward packets. | -| ![](static/gui/host.png) | Host | Emulated server machine having a default route, runs SSH server. | -| ![](static/gui/pc.png) | PC | Basic emulated machine having a default route, runs no processes by default. | -| ![](static/gui/mdr.png) | MDR | Runs Quagga OSPFv3 MDR routing for MANET-optimized routing. | -| ![](static/gui/router.png) | PRouter | Physical router represents a real testbed machine. | +| Icon | Name | Description | +|---|---|---| +| ![](static/gui/router.gif) | Router | Runs Quagga OSPFv2 and OSPFv3 routing to forward packets. | +| ![](static/gui/host.gif) | Host | Emulated server machine having a default route, runs SSH server. | +| ![](static/gui/pc.gif) | PC | Basic emulated machine having a default route, runs no processes by default. | +| ![](static/gui/mdr.gif) | MDR | Runs Quagga OSPFv3 MDR routing for MANET-optimized routing. | +| ![](static/gui/router_green.gif) | PRouter | Physical router represents a real testbed machine. | +| ![](static/gui/document-properties.gif) | Edit | Bring up the custom node dialog. | ### Network Nodes These nodes are mostly used to create a Linux bridge that serves the purpose described below. -| Icon | Name | Description | -|-------------------------------|--------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ![](static/gui/hub.png) | Hub | Ethernet hub forwards incoming packets to every connected node. | -| ![](static/gui/lanswitch.png) | Switch | Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table. | -| ![](static/gui/wlan.png) | Wireless LAN | When routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them. 
| -| ![](static/gui/rj45.png) | RJ45 | RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation. | -| ![](static/gui/tunnel.png) | Tunnel | Tool allows connecting together more than one CORE emulation using GRE tunnels. | +| Icon | Name | Description | +|---|---|---| +| ![](static/gui/hub.gif) | Hub | Ethernet hub forwards incoming packets to every connected node. | +| ![](static/gui/lanswitch.gif) | Switch | Ethernet switch intelligently forwards incoming packets to attached hosts using an Ethernet address hash table. | +| ![](static/gui/wlan.gif) | Wireless LAN | When routers are connected to this WLAN node, they join a wireless network and an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between attached wireless nodes based on the distance between them. | +| ![](static/gui/rj45.gif) | RJ45 | RJ45 Physical Interface Tool, emulated nodes can be linked to real physical interfaces; using this tool, real networks and devices can be physically connected to the live-running emulation. | +| ![](static/gui/tunnel.gif) | Tunnel | Tool allows connecting together more than one CORE emulation using GRE tunnels. | ### Annotation Tools -| Icon | Name | Description | -|-------------------------------|-----------|---------------------------------------------------------------------| -| ![](static/gui/marker.png) | Marker | For drawing marks on the canvas. | -| ![](static/gui/oval.png) | Oval | For drawing circles on the canvas that appear in the background. | -| ![](static/gui/rectangle.png) | Rectangle | For drawing rectangles on the canvas that appear in the background. | -| ![](static/gui/text.png) | Text | For placing text captions on the canvas. | +| Icon | Name | Description | +|---|---|---| +| ![](static/gui/marker.gif) | Marker | For drawing marks on the canvas. | +| ![](static/gui/oval.gif) | Oval | For drawing circles on the canvas that appear in the background. | +| ![](static/gui/rectangle.gif) | Rectangle | For drawing rectangles on the canvas that appear in the background. | +| ![](static/gui/text.gif) | Text | For placing text captions on the canvas. | ### Execution Toolbar @@ -153,12 +140,14 @@ When the Start button is pressed, CORE switches to Execute mode, and the Edit toolbar on the left of the CORE window is replaced with the Execution toolbar Below are the items on this toolbar, starting from the top. -| Icon | Name | Description | -|----------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ![](static/gui/stop.png) | Stop Button | Stops Execute mode, terminates the emulation, returns CORE to edit mode. 
| -| ![](static/gui/select.png) | Selection Tool | In Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node. | -| ![](static/gui/marker.png) | Marker | For drawing freehand lines on the canvas, useful during demonstrations; markings are not saved. | -| ![](static/gui/run.png) | Run Tool | This tool allows easily running a command on all or a subset of all nodes. A list box allows selecting any of the nodes. A text entry box allows entering any command. The command should return immediately, otherwise the display will block awaiting response. The *ping* command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text "NODE" will be replaced with the node name. The command will not be attempted to run on nodes that are not routers, PCs, or hosts, even if they are selected. | +| Icon | Name | Description | +|---|---|---| +| ![](static/gui/select.gif) | Selection Tool | In Execute mode, the Selection Tool can be used for moving nodes around the canvas, and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up menu of run-time options for that node. | +| ![](static/gui/stop.gif) | Stop Button | Stops Execute mode, terminates the emulation, returns CORE to edit mode. | +| ![](static/gui/observe.gif) | Observer Widgets Tool | Clicking on this magnifying glass icon invokes a menu for easily selecting an Observer Widget. The icon has a darker gray background when an Observer Widget is active, during which time moving the mouse over a node will pop up an information display for that node. | +| ![](static/gui/marker.gif) | Marker | For drawing freehand lines on the canvas, useful during demonstrations; markings are not saved. | +| ![](static/gui/twonode.gif) | Two-node Tool | Click to choose a starting and ending node, and run a one-time *traceroute* between those nodes or a continuous *ping -R* between nodes. The output is displayed in real time in a results box, while the IP addresses are parsed and the complete network path is highlighted on the CORE display. | +| ![](static/gui/run.gif) | Run Tool | This tool allows easily running a command on all or a subset of all nodes. A list box allows selecting any of the nodes. A text entry box allows entering any command. The command should return immediately, otherwise the display will block awaiting response. The *ping* command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text "NODE" will be replaced with the node name. The command will not be attempted to run on nodes that are not routers, PCs, or hosts, even if they are selected. | ## Menu @@ -168,61 +157,98 @@ menu, by clicking the dashed line at the top. ### File Menu -The File menu contains options for saving and opening saved sessions. +The File menu contains options for manipulating the **.imn** Configuration +Files. Generally, these menu items should not be used in Execute mode. 
-| Option | Description | -|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| New Session | This starts a new session with an empty canvas. | -| Save | Saves the current topology. If you have not yet specified a file name, the Save As dialog box is invoked. | -| Save As | Invokes the Save As dialog box for selecting a new **.xml** file for saving the current configuration in the XML file. | -| Open | Invokes the File Open dialog box for selecting a new XML file to open. | -| Recently used files | Above the Quit menu command is a list of recently use files, if any have been opened. You can clear this list in the Preferences dialog box. You can specify the number of files to keep in this list from the Preferences dialog. Click on one of the file names listed to open that configuration file. | -| Execute Python Script | Invokes a File Open dialog box for selecting a Python script to run and automatically connect to. After a selection is made, a Python Script Options dialog box is invoked to allow for command-line options to be added. The Python script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work. | -| Quit | The Quit command should be used to exit the CORE GUI. CORE may prompt for termination if you are currently in Execute mode. Preferences and the recently-used files list are saved. | +| Option | Description | +|---|---| +| New | This starts a new file with an empty canvas. | +| Open | Invokes the File Open dialog box for selecting a new **.imn** or XML file to open. You can change the default path used for this dialog in the Preferences Dialog. | +| Save | Saves the current topology. If you have not yet specified a file name, the Save As dialog box is invoked. | +| Save As XML | Invokes the Save As dialog box for selecting a new **.xml** file for saving the current configuration in the XML file. | +| Save As imn | Invokes the Save As dialog box for selecting a new **.imn** topology file for saving the current configuration. Files are saved in the *IMUNES network configuration* file. | +| Export Python script | Prints Python snippets to the console, for inclusion in a CORE Python script. | +| Execute XML or Python script | Invokes a File Open dialog box for selecting an XML file to run or a Python script to run and automatically connect to. If a Python script, the script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work. | +| Execute Python script with options | Invokes a File Open dialog box for selecting a Python script to run and automatically connect to. After a selection is made, a Python Script Options dialog box is invoked to allow for command-line options to be added. The Python script must create a new CORE Session and add this session to the daemon's list of sessions in order for this to work. | +| Open current file in editor | This opens the current topology file in the **vim** text editor. First you need to save the file. Once the file has been edited with a text editor, you will need to reload the file to see your changes. The text editor can be changed from the Preferences Dialog. 
| +| Print | This uses the Tcl/Tk postscript command to print the current canvas to a printer. A dialog is invoked where you can specify a printing command, the default being **lpr**. The postscript output is piped to the print command. | +| Save screenshot | Saves the current canvas as a postscript graphic file. | +| Recently used files | Above the Quit menu command is a list of recently use files, if any have been opened. You can clear this list in the Preferences dialog box. You can specify the number of files to keep in this list from the Preferences dialog. Click on one of the file names listed to open that configuration file. | +| Quit | The Quit command should be used to exit the CORE GUI. CORE may prompt for termination if you are currently in Execute mode. Preferences and the recently-used files list are saved. | ### Edit Menu -| Option | Description | -|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Preferences | Invokes the Preferences dialog box. | -| Custom Nodes | Custom node creation dialog box. | -| Undo | (Disabled) Attempts to undo the last edit in edit mode. | -| Redo | (Disabled) Attempts to redo an edit that has been undone. | -| Cut, Copy, Paste, Delete | Used to cut, copy, paste, and delete a selection. When nodes are pasted, their node numbers are automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their customizations are copied to the new node, but care should be taken as node IP addresses have changed with possibly old addresses remaining in any custom service configurations. Annotations may also be copied and pasted. | +| Option | Description | +|---|---| +| Undo | Attempts to undo the last edit in edit mode. | +| Redo | Attempts to redo an edit that has been undone. | +| Cut, Copy, Paste | Used to cut, copy, and paste a selection. When nodes are pasted, their node numbers are automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their customizations are copied to the new node, but care should be taken as node IP addresses have changed with possibly old addresses remaining in any custom service configurations. Annotations may also be copied and pasted. +| Select All | Selects all items on the canvas. Selected items can be moved as a group. | +| Select Adjacent | Select all nodes that are linked to the already selected node(s). For wireless nodes this simply selects the WLAN node(s) that the wireless node belongs to. You can use this by clicking on a node and pressing CTRL+N to select the adjacent nodes. | +| Find... | Invokes the *Find* dialog box. The Find dialog can be used to search for nodes by name or number. Results are listed in a table that includes the node or link location and details such as IP addresses or link parameters. Clicking on a result will focus the canvas on that node or link, switching canvases if necessary. | +| Clear marker | Clears any annotations drawn with the marker tool. Also clears any markings used to indicate a node's status. | +| Preferences... | Invokes the Preferences dialog box. 
| ### Canvas Menu -The canvas menu provides commands related to the editing canvas. +The canvas menu provides commands for adding, removing, changing, and switching +to different editing canvases. -| Option | Description | -|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Size/scale | Invokes a Canvas Size and Scale dialog that allows configuring the canvas size, scale, and geographic reference point. The size controls allow changing the width and height of the current canvas, in pixels or meters. The scale allows specifying how many meters are equivalent to 100 pixels. The reference point controls specify the latitude, longitude, and altitude reference point used to convert between geographic and Cartesian coordinate systems. By clicking the *Save as default* option, all new canvases will be created with these properties. The default canvas size can also be changed in the Preferences dialog box. | -| Wallpaper | Used for setting the canvas background image. | +| Option | Description | +|---|---| +| New | Creates a new empty canvas at the right of all existing canvases. | +| Manage... | Invokes the *Manage Canvases* dialog box, where canvases may be renamed and reordered, and you can easily switch to one of the canvases by selecting it. | +| Delete | Deletes the current canvas and all items that it contains. | +| Size/scale... | Invokes a Canvas Size and Scale dialog that allows configuring the canvas size, scale, and geographic reference point. The size controls allow changing the width and height of the current canvas, in pixels or meters. The scale allows specifying how many meters are equivalent to 100 pixels. The reference point controls specify the latitude, longitude, and altitude reference point used to convert between geographic and Cartesian coordinate systems. By clicking the *Save as default* option, all new canvases will be created with these properties. The default canvas size can also be changed in the Preferences dialog box. +| Wallpaper... | Used for setting the canvas background image. | +| Previous, Next, First, Last | Used for switching the active canvas to the first, last, or adjacent canvas. | ### View Menu -The View menu features items for toggling on and off their display on the canvas. +The View menu features items for controlling what is displayed on the drawing +canvas. -| Option | Description | -|-----------------|-----------------------------------| -| Interface Names | Display interface names on links. | -| IPv4 Addresses | Display IPv4 addresses on links. | -| IPv6 Addresses | Display IPv6 addresses on links. | -| Node Labels | Display node names. | -| Link Labels | Display link labels. | -| Annotations | Display annotations. | -| Canvas Grid | Display the canvas grid. | +| Option | Description | +|---|---| +| Show | Opens a submenu of items that can be displayed or hidden, such as interface names, addresses, and labels. Use these options to help declutter the display. 
These options are generally saved in the topology files, so scenarios have a more consistent look when copied from one computer to another. | +| Show hidden nodes | Reveal nodes that have been hidden. Nodes are hidden by selecting one or more nodes, right-clicking one and choosing *hide*. | +| Locked | Toggles locked view; when the view is locked, nodes cannot be moved around on the canvas with the mouse. This could be useful when sharing the topology with someone and you do not expect them to change things. | +| 3D GUI... | Launches a 3D GUI by running the command defined under Preferences, *3D GUI command*. This is typically a script that runs the SDT3D display. SDT is the Scripted Display Tool from NRL that is based on NASA's Java-based WorldWind virtual globe software. | +| Zoom In | Magnifies the display. You can also zoom in by clicking *zoom 100%* label in the status bar, or by pressing the **+** (plus) key. | +| Zoom Out | Reduces the size of the display. You can also zoom out by right-clicking *zoom 100%* label in the status bar or by pressing the **-** (minus) key. | ### Tools Menu The tools menu lists different utility functions. -| Option | Description | -|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Find | Display find dialog used for highlighting a node on the canvas. | -| Auto Grid | Automatically layout nodes in a grid. | -| IP addresses | Invokes the IP Addresses dialog box for configuring which IPv4/IPv6 prefixes are used when automatically addressing new interfaces. | -| MAC addresses | Invokes the MAC Addresses dialog box for configuring the starting number used as the lowest byte when generating each interface MAC address. This value should be changed when tunneling between CORE emulations to prevent MAC address conflicts. | +| Option | Description | +|---|---| +| Autorearrange all | Automatically arranges all nodes on the canvas. Nodes having a greater number of links are moved to the center. This mode can continue to run while placing nodes. To turn off this autorearrange mode, click on a blank area of the canvas with the select tool, or choose this menu option again. | +| Autorearrange selected | Automatically arranges the selected nodes on the canvas. | +| Align to grid | Moves nodes into a grid formation, starting with the smallest-numbered node in the upper-left corner of the canvas, arranging nodes in vertical columns. | +| Traffic... | Invokes the CORE Traffic Flows dialog box, which allows configuring, starting, and stopping MGEN traffic flows for the emulation. | +| IP addresses... | Invokes the IP Addresses dialog box for configuring which IPv4/IPv6 prefixes are used when automatically addressing new interfaces. | +| MAC addresses... | Invokes the MAC Addresses dialog box for configuring the starting number used as the lowest byte when generating each interface MAC address. This value should be changed when tunneling between CORE emulations to prevent MAC address conflicts. | +| Build hosts file... | Invokes the Build hosts File dialog box for generating **/etc/hosts** file entries based on IP addresses used in the emulation. | +| Renumber nodes... | Invokes the Renumber Nodes dialog box, which allows swapping one node number with another in a few clicks. | +| Experimental... 
| Menu of experimental options, such as a tool to convert ns-2 scripts to IMUNES imn topologies, supporting only basic ns-2 functionality, and a tool for automatically dividing up a topology into partitions. | +| Topology generator | Opens a submenu of topologies to generate. You can first select the type of node that the topology should consist of, or routers will be chosen by default. Nodes may be randomly placed, aligned in grids, or various other topology patterns. All of the supported patterns are listed in the table below. | +| Debugger... | Opens the CORE Debugger window for executing arbitrary Tcl/Tk commands. | + +#### Topology Generator + +| Pattern | Description | +|---|---| +| Random | Nodes are randomly placed about the canvas, but are not linked together. This can be used in conjunction with a WLAN node to quickly create a wireless network. | +| Grid | Nodes are placed in horizontal rows starting in the upper-left corner, evenly spaced to the right; nodes are not linked to each other. | +| Connected Grid | Nodes are placed in an N x M (width and height) rectangular grid, and each node is linked to the node above, below, left and right of itself. | +| Chain | Nodes are linked together one after the other in a chain. | +| Star | One node is placed in the center with N nodes surrounding it in a circular pattern, with each node linked to the center node. | +| Cycle | Nodes are arranged in a circular pattern with every node connected to its neighbor to form a closed circular path. | +| Wheel | The wheel pattern links nodes in a combination of both Star and Cycle patterns. | +| Cube | Generate a cube graph of nodes. | +| Clique | Creates a clique graph of nodes, where every node is connected to every other node. | +| Bipartite | Creates a bipartite graph of nodes, having two disjoint sets of vertices. | ### Widgets Menu @@ -249,29 +275,30 @@ Here are some standard widgets: Only half of the line is drawn because each router may be in a different adjacency state with respect to the other. * **Throughput** - displays the kilobits-per-second throughput above each link, - using statistics gathered from each link. If the throughput exceeds a certain - threshold, the link will become highlighted. For wireless nodes which broadcast - data to all nodes in range, the throughput rate is displayed next to the node and - the node will become circled if the threshold is exceeded. + using statistics gathered from the ng_pipe Netgraph node that implements each + link. If the throughput exceeds a certain threshold, the link will become + highlighted. For wireless nodes which broadcast data to all nodes in range, + the throughput rate is displayed next to the node and the node will become + circled if the threshold is exceeded. #### Observer Widgets -These Widgets are available from the **Observer Widgets** submenu of the -**Widgets** menu, and from the Widgets Tool on the toolbar. Only one Observer Widget may +These Widgets are available from the *Observer Widgets* submenu of the +*Widgets* menu, and from the Widgets Tool on the toolbar. Only one Observer Widget may be used at a time. Mouse over a node while the session is running to pop up an informational display about that node. Available Observer Widgets include IPv4 and IPv6 routing tables, socket information, list of running processes, and OSPFv2/v3 neighbor information. -Observer Widgets may be edited by the user and rearranged. 
Choosing -**Widgets->Observer Widgets->Edit Observers** from the Observer Widget menu will -invoke the Observer Widgets dialog. A list of Observer Widgets is displayed along -with up and down arrows for rearranging the list. Controls are available for -renaming each widget, for changing the command that is run during mouse over, and -for adding and deleting items from the list. Note that specified commands should -return immediately to avoid delays in the GUI display. Changes are saved to a -**config.yaml** file in the CORE configuration directory. +Observer Widgets may be edited by the user and rearranged. Choosing *Edit...* +from the Observer Widget menu will invoke the Observer Widgets dialog. A list +of Observer Widgets is displayed along with up and down arrows for rearranging +the list. Controls are available for renaming each widget, for changing the +command that is run during mouse over, and for adding and deleting items from +the list. Note that specified commands should return immediately to avoid +delays in the GUI display. Changes are saved to a **widgets.conf** file in +the CORE configuration directory. ### Session Menu @@ -279,41 +306,214 @@ The Session Menu has entries for starting, stopping, and managing sessions, in addition to global options such as node types, comments, hooks, servers, and options. -| Option | Description | -|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Sessions | Invokes the CORE Sessions dialog box containing a list of active CORE sessions in the daemon. Basic session information such as name, node count, start time, and a thumbnail are displayed. This dialog allows connecting to different sessions, shutting them down, or starting a new session. | -| Servers | Invokes the CORE emulation servers dialog for configuring. | -| Options | Presents per-session options, such as the IPv4 prefix to be used, if any, for a control network the ability to preserve the session directory; and an on/off switch for SDT3D support. | -| Hooks | Invokes the CORE Session Hooks window where scripts may be configured for a particular session state. The session states are defined in the [table](#session-states) below. The top of the window has a list of configured hooks, and buttons on the bottom left allow adding, editing, and removing hook scripts. The new or edit button will open a hook script editing window. A hook script is a shell script invoked on the host (not within a virtual node). | +| Option | Description | +|---|---| +| Start or Stop | This starts or stops the emulation, performing the same function as the green Start or red Stop button. | +| Change sessions... | Invokes the CORE Sessions dialog box containing a list of active CORE sessions in the daemon. Basic session information such as name, node count, start time, and a thumbnail are displayed. This dialog allows connecting to different sessions, shutting them down, or starting a new session. | +| Node types... | Invokes the CORE Node Types dialog, performing the same function as the Edit button on the Network-Layer Nodes toolbar. | +| Comments... 
| Invokes the CORE Session Comments window where optional text comments may be specified. These comments are saved at the top of the configuration file, and can be useful for describing the topology or how to use the network. | +| Hooks... | Invokes the CORE Session Hooks window where scripts may be configured for a particular session state. The session states are defined in the [table](#session-states) below. The top of the window has a list of configured hooks, and buttons on the bottom left allow adding, editing, and removing hook scripts. The new or edit button will open a hook script editing window. A hook script is a shell script invoked on the host (not within a virtual node). | +| Reset node positions | If you have moved nodes around using the mouse or by using a mobility module, choosing this item will reset all nodes to their original position on the canvas. The node locations are remembered when you first press the Start button. | +| Emulation servers... | Invokes the CORE emulation servers dialog for configuring. | +| Change Sessions... | Invokes the Sessions dialog for switching between different running sessions. This dialog is presented during startup when one or more sessions are already running. | +| Options... | Presents per-session options, such as the IPv4 prefix to be used, if any, for a control network the ability to preserve the session directory; and an on/off switch for SDT3D support. | #### Session States -| State | Description | -|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Definition | Used by the GUI to tell the backend to clear any state. | -| Configuration | When the user presses the *Start* button, node, link, and other configuration data is sent to the backend. This state is also reached when the user customizes a service. | -| Instantiation | After configuration data has been sent, just before the nodes are created. | -| Runtime | All nodes and networks have been built and are running. (This is the same state at which the previously-named *global experiment script* was run.) | -| Datacollect | The user has pressed the *Stop* button, but before services have been stopped and nodes have been shut down. This is a good time to collect log files and other data from the nodes. | -| Shutdown | All nodes and networks have been shut down and destroyed. | +| State | Description | +|---|---| +| definition | Used by the GUI to tell the backend to clear any state. | +| configuration | When the user presses the *Start* button, node, link, and other configuration data is sent to the backend. This state is also reached when the user customizes a service. | +| instantiation | After configuration data has been sent, just before the nodes are created. | +| runtime | All nodes and networks have been built and are running. (This is the same state at which the previously-named *global experiment script* was run.) +| datacollect | The user has pressed the *Stop* button, but before services have been stopped and nodes have been shut down. This is a good time to collect log files and other data from the nodes. | +| shutdown | All nodes and networks have been shut down and destroyed. | ### Help Menu -| Option | Description | -|--------------------------|---------------------------------------------------------------| -| CORE Github (www) | Link to the CORE GitHub page. 
|
-| CORE Documentation (www) | Lnk to the CORE Documentation page. |
-| About | Invokes the About dialog box for viewing version information. |
+| Option | Description |
+|---|---|
+| CORE Github (www) | Link to the CORE GitHub page. |
+| CORE Documentation (www) | Link to the CORE Documentation page. |
+| About | Invokes the About dialog box for viewing version information. |
+
+## Connecting with Physical Networks
+
+CORE's emulated networks run in real time, so they can be connected to live
+physical networks. The RJ45 tool and the Tunnel tool help with connecting to
+the real world. These tools are available from the *Link-layer nodes* menu.
+
+When connecting two or more CORE emulations together, MAC address collisions
+should be avoided. CORE automatically assigns MAC addresses to interfaces when
+the emulation is started, starting with **00:00:00:aa:00:00** and incrementing
+the bottom byte. The starting byte should be changed on the second CORE machine
+using the *MAC addresses...* option from the *Tools* menu.
+
+### RJ45 Tool
+
+The RJ45 node in CORE represents a physical interface on the real CORE machine.
+Any real-world network device can be connected to the interface and communicate
+with the CORE nodes in real time.
+
+The main drawback is that one physical interface is required for each
+connection. When the physical interface is assigned to CORE, it may not be used
+for anything else. Another consideration is that the computer or network that
+you are connecting to must be co-located with the CORE machine.
+
+To place an RJ45 connection, click on the *Link-layer nodes* toolbar and select
+the *RJ45 Tool* from the submenu. Click on the canvas near the node you want to
+connect to. This could be a router, hub, switch, or WLAN, for example. Now
+click on the *Link Tool* and draw a link between the RJ45 and the other node.
+The RJ45 node will display "UNASSIGNED". Double-click the RJ45 node to assign a
+physical interface. A list of available interfaces will be shown, and one may
+be selected by double-clicking its name in the list, or an interface name may
+be entered into the text box.
+
+> **NOTE:** When you press the Start button to instantiate your topology, the
+  interface assigned to the RJ45 will be connected to the CORE topology. The
+  interface can no longer be used by the system.
+
+Multiple RJ45 nodes can be used within CORE and assigned to the same physical
+interface if 802.1Q VLANs are used. This allows for more RJ45 nodes than
+physical ports are available, but the hardware connected to the physical port
+(e.g. a switch) must support VLAN tagging, and the available bandwidth will be
+shared.
+
+You need to create separate VLAN virtual devices on the Linux host,
+and then assign these devices to RJ45 nodes inside of CORE. The VLAN tagging is
+actually performed outside of CORE, so when the CORE emulated node receives a
+packet, the VLAN tag will already be removed.
+
+Here are example commands for creating VLAN devices under Linux:
+
+```shell
+ip link add link eth0 name eth0.1 type vlan id 1
+ip link add link eth0 name eth0.2 type vlan id 2
+ip link add link eth0 name eth0.3 type vlan id 3
+```
+
+### Tunnel Tool
+
+The tunnel tool builds GRE tunnels between CORE emulations or other hosts.
+Tunneling can be helpful when the number of physical interfaces is limited or
+when the peer is located on a different network. Also, unlike the RJ45 tool, a
+physical interface does not need to be dedicated to CORE.
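+Before relying on the Tunnel Tool, it can be worth confirming that the host
+kernel exposes the *gretap* device type. A minimal check, assuming the standard
+Linux **ip_gre** module is what provides gretap on your kernel:
+
+```shell
+# load GRE/gretap support if it is not already present (module name is an assumption)
+sudo modprobe ip_gre
+# list any GRE-related modules that are now loaded
+lsmod | grep -i gre
+```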
+
+The peer GRE tunnel endpoint may be another CORE machine or another
+host that supports GRE tunneling. When placing a Tunnel node, initially
+the node will display "UNASSIGNED". This text should be replaced with the IP
+address of the tunnel peer. This is the IP address of the other CORE machine or
+physical machine, not an IP address of another virtual node.
+
+> **NOTE:** Be aware of possible MTU (Maximum Transmission Unit) issues with GRE
+  devices. The *gretap* device has an interface MTU of 1,458 bytes; when joined
+  to a Linux bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will
+  not perform fragmentation for large packets if other bridge ports have a
+  higher MTU such as 1,500 bytes.
+
+The GRE key is used to identify flows with GRE tunneling. This allows multiple
+GRE tunnels to exist between the same pair of tunnel peers. A unique number
+should be used when multiple tunnels are used with the same peer. When
+configuring the peer side of the tunnel, ensure that the matching keys are
+used.
+
+Here are example commands for building the other end of a tunnel on a Linux
+machine. In this example, a router in CORE has the virtual address
+**10.0.0.1/24** and the CORE host machine has the (real) address
+**198.51.100.34/24**. The Linux box that will connect with the CORE machine is
+reachable over the (real) network at **198.51.100.76/24**.
+The emulated router is linked with the Tunnel Node. In the
+Tunnel Node configuration dialog, the address **198.51.100.76** is entered, with
+the key set to **1**. The gretap interface on the Linux box will be assigned
+an address from the subnet of the virtual router node, **10.0.0.2/24**.
+
+```shell
+# these commands are run on the tunnel peer
+sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
+sudo ip addr add 10.0.0.2/24 dev gt0
+sudo ip link set dev gt0 up
+```
+
+Now the virtual router should be able to ping the Linux machine:
+
+```shell
+# from the CORE router node
+ping 10.0.0.2
+```
+
+And the Linux machine should be able to ping inside the CORE emulation:
+
+```shell
+# from the tunnel peer
+ping 10.0.0.1
+```
+
+To debug this configuration, **tcpdump** can be run on the gretap devices, or
+on the physical interfaces on the CORE or Linux machines. Make sure that a
+firewall is not blocking the GRE traffic.
+
+### Communicating with the Host Machine
+
+The host machine that runs the CORE GUI and/or daemon is not necessarily
+accessible from a node. Running an X11 application on a node, for example,
+requires some channel of communication for the application to connect with
+the X server for graphical display. There are several different ways to
+connect from the node to the host and vice versa.
+
+#### Control Network
+
+The quickest way to connect with the host machine is through the primary
+control network.
+
+With a control network, the host can launch an X11 application on a node.
+To run an X11 application on the node, the **SSH** service can be enabled on
+the node, and SSH with X11 forwarding can be used from the host to the node.
+
+```shell
+# SSH from host to node n5 to run an X11 app
+ssh -X 172.16.0.5 xclock
+```
+
+Note that the **coresendmsg** utility can be used for a node to send
+messages to the CORE daemon running on the host (if **listenaddr = 0.0.0.0**
+is set in the **/etc/core/core.conf** file) to interact with the running
+emulation. For example, a node may move itself or other nodes, or change
+its icon based on some node state.
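+As a rough illustration of the idea above, a node could reposition itself by
+sending a Node message to the daemon over the control network. The flags shown
+here are an assumption based on typical **coresendmsg** usage; check
+`coresendmsg -h` for the exact syntax on your installation.
+
+```shell
+# run from within a node; assumes listenaddr = 0.0.0.0 in /etc/core/core.conf and
+# that the daemon is reachable at the default control network address 172.16.0.254
+coresendmsg -a 172.16.0.254 node number=5 xpos=150 ypos=300
+```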
+ +#### Other Methods + +There are still other ways to connect a host with a node. The RJ45 Tool +can be used in conjunction with a dummy interface to access a node: + +```shell +sudo modprobe dummy numdummies=1 +``` + +A **dummy0** interface should appear on the host. Use the RJ45 tool assigned +to **dummy0**, and link this to a node in your scenario. After starting the +session, configure an address on the host. + +```shell +sudo ip link show type bridge +# determine bridge name from the above command +# assign an IP address on the same network as the linked node +sudo ip addr add 10.0.1.2/24 dev b.48304.34658 +``` + +In the example shown above, the host will have the address **10.0.1.2** and +the node linked to the RJ45 may have the address **10.0.1.1**. ## Building Sample Networks ### Wired Networks -Wired networks are created using the **Link Tool** to draw a link between two +Wired networks are created using the *Link Tool* to draw a link between two nodes. This automatically draws a red line representing an Ethernet link and creates new interfaces on network-layer nodes. -Double-click on the link to invoke the **link configuration** dialog box. Here +Double-click on the link to invoke the *link configuration* dialog box. Here you can change the Bandwidth, Delay, Loss, and Duplicate rate parameters for that link. You can also modify the color and width of the link, affecting its display. @@ -329,42 +529,32 @@ The wireless LAN (WLAN) is covered in the next section. ### Wireless Networks -Wireless networks allow moving nodes around to impact the connectivity between them. Connections between a -pair of nodes is stronger when the nodes are closer while connection is weaker when the nodes are further away. -CORE offers several levels of wireless emulation fidelity, depending on modeling needs and available -hardware. +The wireless LAN node allows you to build wireless networks where moving nodes +around affects the connectivity between them. Connection between a pair of nodes is stronger +when the nodes are closer while connection is weaker when the nodes are further away. +The wireless LAN, or WLAN, node appears as a small cloud. The WLAN offers +several levels of wireless emulation fidelity, depending on your modeling needs. -* WLAN Node - * uses set bandwidth, delay, and loss - * links are enabled or disabled based on a set range - * uses the least CPU when moving, but nothing extra when not moving -* Wireless Node - * uses set bandwidth, delay, and initial loss - * loss dynamically changes based on distance between nodes, which can be configured with range parameters - * links are enabled or disabled based on a set range - * uses more CPU to calculate loss for every movement, but nothing extra when not moving -* EMANE Node - * uses a physical layer model to account for signal propagation, antenna profile effects and interference - sources in order to provide a realistic environment for wireless experimentation - * uses the most CPU for every packet, as complex calculations are used for fidelity - * See [Wiki](https://github.com/adjacentlink/emane/wiki) for details on general EMANE usage - * See [CORE EMANE](emane.md) for details on using EMANE in CORE +The WLAN tool can be extended with plug-ins for different levels of wireless +fidelity. The basic on/off range is the default setting available on all +platforms. Other plug-ins offer higher fidelity at the expense of greater +complexity and CPU usage. The availability of certain plug-ins varies depending +on platform. 
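+Because the basic on/off model relies on Ethernet bridging with ebtables on
+Linux, it can be worth confirming that the tool is present before building
+larger wireless scenarios. A minimal check, assuming the legacy ebtables binary
+is installed on the host:
+
+```shell
+# confirm the ebtables userspace tool is available
+ebtables -V
+# list current ebtables rules; CORE manages WLAN connectivity with such rules during a session
+sudo ebtables -L
+```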
See the table below for a brief overview of wireless model types. -| Model | Type | Supported Platform(s) | Fidelity | Description | -|----------|--------|-----------------------|----------|-------------------------------------------------------------------------------| -| WLAN | On/Off | Linux | Low | Ethernet bridging with nftables | -| Wireless | On/Off | Linux | Medium | Ethernet bridging with nftables | -| EMANE | RF | Linux | High | TAP device connected to EMANE emulator with pluggable MAC and PHY radio types | -#### Example WLAN Network Setup +|Model|Type|Supported Platform(s)|Fidelity|Description| +|-----|----|---------------------|--------|-----------| +|Basic|on/off|Linux|Low|Ethernet bridging with ebtables| +|EMANE|Plug-in|Linux|High|TAP device connected to EMANE emulator with pluggable MAC and PHY radio types| To quickly build a wireless network, you can first place several router nodes onto the canvas. If you have the Quagga MDR software installed, it is -recommended that you use the **mdr** node type for reduced routing overhead. Next -choose the **WLAN** from the **Link-layer nodes** submenu. First set the +recommended that you use the *mdr* node type for reduced routing overhead. Next +choose the *wireless LAN* from the *Link-layer nodes* submenu. First set the desired WLAN parameters by double-clicking the cloud icon. Then you can link -all selected right-clicking on the WLAN and choosing **Link to Selected**. +all of the routers by right-clicking on the WLAN and choosing *Link to all +routers*. Linking a router to the WLAN causes a small antenna to appear, but no red link line is drawn. Routers can have multiple wireless links and both wireless and @@ -374,44 +564,31 @@ enables OSPFv3 with MANET extensions. This is a Boeing-developed extension to Quagga's OSPFv3 that reduces flooding overhead and optimizes the flooding procedure for mobile ad-hoc (MANET) networks. -The default configuration of the WLAN is set to use the basic range model. Having this model +The default configuration of the WLAN is set to use the basic range model, +using the *Basic* tab in the WLAN configuration dialog. Having this model selected causes **core-daemon** to calculate the distance between nodes based on screen pixels. A numeric range in screen pixels is set for the wireless -network using the **Range** slider. When two wireless nodes are within range of -each other, a green line is drawn between them and they are linked. Two +network using the *Range* slider. When two wireless nodes are within range of +each other, a green line is drawn between them and they are linked. Two wireless nodes that are farther than the range pixels apart are not linked. During Execute mode, users may move wireless nodes around by clicking and dragging them, and wireless links will be dynamically made or broken. -### Running Commands within Nodes - -You can double click a node to bring up a terminal for running shell commands. Within -the terminal you can run anything you like and those commands will be run in context of the node. -For standard CORE nodes, the only thing to keep in mind is that you are using the host file -system and anything you change or do can impact the greater system. By default, your terminal -will open within the nodes home directory for the running session, but it is temporary and -will be removed when the session is stopped. - -You can also launch GUI based applications from within standard CORE nodes, but you need to -enable xhost access to root. 
- -```shell -xhost +local:root -``` +The *EMANE* tab lists available EMANE models to use for wireless networking. +See the [EMANE](emane.md) chapter for details on using EMANE. ### Mobility Scripting CORE has a few ways to script mobility. -| Option | Description | -|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------| -| ns-2 script | The script specifies either absolute positions or waypoints with a velocity. Locations are given with Cartesian coordinates. | -| gRPC API | An external entity can move nodes by leveraging the gRPC API | +| Option | Description | +|---|---| +| ns-2 script | The script specifies either absolute positions or waypoints with a velocity. Locations are given with Cartesian coordinates. | +| CORE API | An external entity can move nodes by sending CORE API Node messages with updated X,Y coordinates; the **coresendmsg** utility allows a shell script to generate these messages. | | EMANE events | See [EMANE](emane.md) for details on using EMANE scripts to move nodes around. Location information is typically given as latitude, longitude, and altitude. | For the first method, you can create a mobility script using a text -editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate -the script with one of the wireless +editor, or using a tool such as [BonnMotion](http://net.cs.uni-bonn.de/wg/cs/applications/bonnmotion/), and associate the script with one of the wireless using the WLAN configuration dialog box. Click the *ns-2 mobility script...* button, and set the *mobility script file* field in the resulting *ns2script* configuration dialog. @@ -427,7 +604,7 @@ bm NSFile -f sample When the Execute mode is started and one of the WLAN nodes has a mobility script, a mobility script window will appear. This window contains controls for starting, stopping, and resetting the running time for the mobility script. The -**loop** checkbox causes the script to play continuously. The **resolution** text +*loop* checkbox causes the script to play continuously. The *resolution* text box contains the number of milliseconds between each timer event; lower values cause the mobility to appear smoother but consumes greater CPU time. @@ -451,28 +628,79 @@ accurate. Examples mobility scripts (and their associated topology files) can be found in the **configs/** directory. -## Alerts +## Multiple Canvases -The alerts button is located in the bottom right-hand corner -of the status bar in the CORE GUI. This will change colors to indicate one or -more problems with the running emulation. Clicking on the alerts button will invoke the -alerts dialog. +CORE supports multiple canvases for organizing emulated nodes. Nodes running on +different canvases may be linked together. -The alerts dialog contains a list of alerts received from -the CORE daemon. An alert has a time, severity level, optional node number, -and source. When the alerts button is red, this indicates one or more fatal -exceptions. An alert with a fatal severity level indicates that one or more +To create a new canvas, choose *New* from the *Canvas* menu. A new canvas tab +appears in the bottom left corner. Clicking on a canvas tab switches to that +canvas. Double-click on one of the tabs to invoke the *Manage Canvases* dialog +box. Here, canvases may be renamed and reordered, and you can easily switch to +one of the canvases by selecting it. 
+ +Each canvas maintains its own set of nodes and annotations. To link between +canvases, select a node and right-click on it, choose *Create link to*, choose +the target canvas from the list, and from that submenu the desired node. A +pseudo-link will be drawn, representing the link between the two nodes on +different canvases. Double-clicking on the label at the end of the arrow will +jump to the canvas that it links. + +## Check Emulation Light (CEL) + +The |cel| Check Emulation Light, or CEL, is located in the bottom right-hand corner +of the status bar in the CORE GUI. This is a yellow icon that indicates one or +more problems with the running emulation. Clicking on the CEL will invoke the +CEL dialog. + +The Check Emulation Light dialog contains a list of exceptions received from +the CORE daemon. An exception has a time, severity level, optional node number, +and source. When the CEL is blinking, this indicates one or more fatal +exceptions. An exception with a fatal severity level indicates that one or more of the basic pieces of emulation could not be created, such as failure to create a bridge or namespace, or the failure to launch EMANE processes for an EMANE-based network. -Clicking on an alert displays details for that -exceptio. The exception source is a text string +Clicking on an exception displays details for that +exception. If a node number is specified, that node is highlighted on the +canvas when the exception is selected. The exception source is a text string to help trace where the exception occurred; "service:UserDefined" for example, would appear for a failed validation command with the UserDefined service. -A button is available at the bottom of the dialog for clearing the exception -list. +Buttons are available at the bottom of the dialog for clearing the exception +list and for viewing the CORE daemon and node log files. + +> **NOTE:** In batch mode, exceptions received from the CORE daemon are displayed on + the console. + +## Configuration Files + +Configurations are saved to **xml** or **.imn** topology files using +the *File* menu. You +can easily edit these files with a text editor. +Any time you edit the topology +file, you will need to stop the emulation if it were running and reload the +file. + +The **.imn** file format comes from IMUNES, and is +basically Tcl lists of nodes, links, etc. +Tabs and spacing in the topology files are important. The file starts by +listing every node, then links, annotations, canvases, and options. Each entity +has a block contained in braces. The first block is indented by four spaces. +Within the **network-config** block (and any *custom-*-config* block), the +indentation is one tab character. + +> **NOTE:** There are several topology examples included with CORE in + the **configs/** directory. + This directory can be found in **~/.core/configs**, or + installed to the filesystem + under **/usr[/local]/share/examples/configs**. + +> **NOTE:** When using the **.imn** file format, file paths for things like custom + icons may contain the special variables **$CORE_DATA_DIR** or **$CONFDIR** which + will be substituted with **/usr/share/core** or **~/.core/configs**. + +> **NOTE:** Feel free to edit the files directly using your favorite text editor. ## Customizing your Topology's Look @@ -495,3 +723,12 @@ A background image for the canvas may be set using the *Wallpaper...* option from the *Canvas* menu. The image may be centered, tiled, or scaled to fit the canvas size. 
An existing terrain, map, or network diagram could be used as a background, for example, with CORE nodes drawn on top. + +## Preferences + +The *Preferences* Dialog can be accessed from the **Edit_Menu**. There are +numerous defaults that can be set with this dialog, which are stored in the +**~/.core/prefs.conf** preferences file. + + + diff --git a/docs/hitl.md b/docs/hitl.md deleted file mode 100644 index b659a36f..00000000 --- a/docs/hitl.md +++ /dev/null @@ -1,127 +0,0 @@ -# Hardware In The Loop - -## Overview - -In some cases it may be impossible or impractical to run software using CORE -nodes alone. You may need to bring in external hardware into the network. -CORE's emulated networks run in real time, so they can be connected to live -physical networks. The RJ45 tool and the Tunnel tool help with connecting to -the real world. These tools are available from the **Link Layer Nodes** menu. - -When connecting two or more CORE emulations together, MAC address collisions -should be avoided. CORE automatically assigns MAC addresses to interfaces when -the emulation is started, starting with **00:00:00:aa:00:00** and incrementing -the bottom byte. The starting byte should be changed on the second CORE machine -using the **Tools->MAC Addresses** option the menu. - -## RJ45 Node - -CORE provides the RJ45 node, which represents a physical -interface within the host that is running CORE. Any real-world network -devices can be connected to the interface and communicate with the CORE nodes in real time. - -The main drawback is that one physical interface is required for each -connection. When the physical interface is assigned to CORE, it may not be used -for anything else. Another consideration is that the computer or network that -you are connecting to must be co-located with the CORE machine. - -### GUI Usage - -To place an RJ45 connection, click on the **Link Layer Nodes** toolbar and select -the **RJ45 Node** from the options. Click on the canvas, where you would like -the nodes to place. Now click on the **Link Tool** and draw a link between the RJ45 -and the other node you wish to be connected to. The RJ45 node will display "UNASSIGNED". -Double-click the RJ45 node to assign a physical interface. A list of available -interfaces will be shown, and one may be selected, then selecting **Apply**. - -!!! note - - When you press the Start button to instantiate your topology, the - interface assigned to the RJ45 will be connected to the CORE topology. The - interface can no longer be used by the system. - -### Multiple RJ45s with One Interface (VLAN) - -It is possible to have multiple RJ45 nodes using the same physical interface -by leveraging 802.1x VLANs. This allows for more RJ45 nodes than physical ports -are available, but the (e.g. switching) hardware connected to the physical port -must support the VLAN tagging, and the available bandwidth will be shared. - -You need to create separate VLAN virtual devices on the Linux host, -and then assign these devices to RJ45 nodes inside of CORE. The VLANing is -actually performed outside of CORE, so when the CORE emulated node receives a -packet, the VLAN tag will already be removed. - -Here are example commands for creating VLAN devices under Linux: - -```shell -ip link add link eth0 name eth0.1 type vlan id 1 -ip link add link eth0 name eth0.2 type vlan id 2 -ip link add link eth0 name eth0.3 type vlan id 3 -``` - -## Tunnel Tool - -The tunnel tool builds GRE tunnels between CORE emulations or other hosts. 
-Tunneling can be helpful when the number of physical interfaces is limited or -when the peer is located on a different network. In this case a physical interface does -not need to be dedicated to CORE as with the RJ45 tool. - -The peer GRE tunnel endpoint may be another CORE machine or another -host that supports GRE tunneling. When placing a Tunnel node, initially -the node will display "UNASSIGNED". This text should be replaced with the IP -address of the tunnel peer. This is the IP address of the other CORE machine or -physical machine, not an IP address of another virtual node. - -!!! note - - Be aware of possible MTU (Maximum Transmission Unit) issues with GRE devices. - The *gretap* device has an interface MTU of 1,458 bytes; when joined to a Linux - bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform - fragmentation for large packets if other bridge ports have a higher MTU such - as 1,500 bytes. - -The GRE key is used to identify flows with GRE tunneling. This allows multiple -GRE tunnels to exist between that same pair of tunnel peers. A unique number -should be used when multiple tunnels are used with the same peer. When -configuring the peer side of the tunnel, ensure that the matching keys are -used. - -### Example Usage - -Here are example commands for building the other end of a tunnel on a Linux -machine. In this example, a router in CORE has the virtual address -**10.0.0.1/24** and the CORE host machine has the (real) address -**198.51.100.34/24**. The Linux box -that will connect with the CORE machine is reachable over the (real) network -at **198.51.100.76/24**. -The emulated router is linked with the Tunnel Node. In the -Tunnel Node configuration dialog, the address **198.51.100.76** is entered, with -the key set to **1**. The gretap interface on the Linux box will be assigned -an address from the subnet of the virtual router node, -**10.0.0.2/24**. - -```shell -# these commands are run on the tunnel peer -sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1 -sudo ip addr add 10.0.0.2/24 dev gt0 -sudo ip link set dev gt0 up -``` - -Now the virtual router should be able to ping the Linux machine: - -```shell -# from the CORE router node -ping 10.0.0.2 -``` - -And the Linux machine should be able to ping inside the CORE emulation: - -```shell -# from the tunnel peer -ping 10.0.0.1 -``` - -To debug this configuration, **tcpdump** can be run on the gretap devices, or -on the physical interfaces on the CORE or Linux machines. Make sure that a -firewall is not blocking the GRE traffic. diff --git a/docs/index.md b/docs/index.md index 4afec59f..5814e141 100644 --- a/docs/index.md +++ b/docs/index.md @@ -4,15 +4,46 @@ CORE (Common Open Research Emulator) is a tool for building virtual networks. As an emulator, CORE builds a representation of a real computer network that runs in real time, as opposed to simulation, where abstract models are -used. The live-running emulation can be connected to physical networks and routers. It provides an environment for +used. The live-running emulation can be connected to physical networks and routers. It provides an environment for running real applications and protocols, taking advantage of tools provided by the Linux operating system. CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating networking scenarios, security studies, and increasing the size of physical test networks. 
### Key Features - * Efficient and scalable * Runs applications and protocols without modification * Drag and drop GUI * Highly customizable + +## Topics + +| Topic | Description| +|-------|------------| +|[Architecture](architecture.md)|Overview of the architecture| +|[Installation](install.md)|How to install CORE and its requirements| +|[GUI](gui.md)|How to use the GUI| +|[(BETA) Python GUI](pygui.md)|How to use the BETA python based GUI| +|[Python API](python.md)|Covers how to control core directly using python| +|[gRPC API](grpc.md)|Covers how control core using gRPC| +|[Distributed](distributed.md)|Details for running CORE across multiple servers| +|[Node Types](nodetypes.md)|Overview of node types supported within CORE| +|[CTRLNET](ctrlnet.md)|How to use control networks to communicate with nodes from host| +|[Services](services.md)|Overview of provided services and creating custom ones| +|[EMANE](emane.md)|Overview of EMANE integration and integrating custom EMANE models| +|[Performance](performance.md)|Notes on performance when using CORE| +|[Developers Guide](devguide.md)|Overview on how to contribute to CORE| + +## Credits + +The CORE project was derived from the open source IMUNES project from the University of Zagreb in 2004. In 2006, +changes for CORE were released back to that project, some items of which were adopted. Marko Zec is the +primary developer from the University of Zagreb responsible for the IMUNES (GUI) and VirtNet (kernel) projects. Ana +Kukec and Miljenko Mikuc are known contributors. + +Jeff Ahrenholz has been the primary Boeing developer of CORE, and has written this manual. Tom Goff designed the +Python framework and has made significant contributions. Claudiu Danilov, Rod Santiago, Kevin Larson, Gary Pei, +Phil Spagnolo, and Ian Chakeres have contributed code to CORE. Dan Mackley helped develop the CORE API, originally to +interface with a simulator. Jae Kim and Tom Henderson have supervised the project and provided direction. + +Copyright (c) 2005-2020, the Boeing Company. diff --git a/docs/install.md b/docs/install.md index 51c05dbc..7c5ebb84 100644 --- a/docs/install.md +++ b/docs/install.md @@ -1,84 +1,103 @@ # Installation - -!!! warning - - If Docker is installed, the default iptable rules will block CORE traffic +* Table of Contents +{:toc} ## Overview +CORE provides a script to help automate the installation of dependencies, +build and install, and either generate a CORE specific python virtual environment +or build and install a python wheel. -CORE currently supports and provides the following installation options, with the package -option being preferred. - -* [Package based install (rpm/deb)](#package-based-install) -* [Script based install](#script-based-install) -* [Dockerfile based install](#dockerfile-based-install) +> **WARNING:** if Docker is installed, the default iptable rules will block CORE traffic ### Requirements - Any computer capable of running Linux should be able to run CORE. Since the physical machine will be hosting numerous containers, as a general rule you should select a machine having as much RAM and CPU resources as possible. * Linux Kernel v3.3+ * iproute2 4.5+ is a requirement for bridge related commands -* nftables compatible kernel and nft command line tool +* ebtables not backed by nftables ### Supported Linux Distributions - Plan is to support recent Ubuntu and CentOS LTS releases. 
Verified: +* Ubuntu - 18.04, 20.04 +* CentOS - 7.8, 8.0* -* Ubuntu - 18.04, 20.04, 22.04 -* CentOS - 7.8 +> **NOTE:** Ubuntu 20.04 requires installing legacy ebtables for WLAN +> functionality + +> **NOTE:** CentOS 8 does not provide legacy ebtables support, WLAN will not +> function properly + +> **NOTE:** CentOS 8 does not have the netem kernel mod available by default + +CentOS 8 Enabled netem: +```shell +sudo yum update +# restart into updated kernel +sudo yum install -y kernel-modules-extra +sudo modprobe sch_netem +``` + +### Tools Used +The following tools will be leveraged during installation: + +|Tool|Description| +|---|---| +|[pip](https://pip.pypa.io/en/stable/)|used to install pipx| +|[pipx](https://pipxproject.github.io/pipx/)|used to install standalone python tools (invoke, poetry)| +|[invoke](http://www.pyinvoke.org/)|used to run provided tasks (install, uninstall, reinstall, etc)| +|[poetry](https://python-poetry.org/)|used to install python virtual environment or building a python wheel| ### Files +The following is a list of files that would be installed after running the automated installation. -The following is a list of files that would be installed after installation. +> **NOTE:** the default install prefix is /usr/local, but can be changed as noted below -* executables - * `/bin/{vcmd, vnode}` - * can be adjusted using script based install , package will be /usr +* executable files + * `/bin/{core-daemon, core-gui, vcmd, vnoded, etc}` +* tcl/tk gui files + * `/lib/core` + * `/share/core/icons` +* example imn files + * `/share/core/examples` * python files - * virtual environment `/opt/core/venv` - * local install will be local to the python version used - * `python3 -c "import core; print(core.__file__)"` - * scripts {core-daemon, core-cleanup, etc} - * virtualenv `/opt/core/venv/bin` - * local `/usr/local/bin` + * poetry virtual env + * `cd /daemon && poetry env info` + * `~/.cache/pypoetry/virtualenvs/` + * local python install + * default install path for python3 installation of a wheel + * `python3 -c "import core; print(core.__file__)"` * configuration files - * `/etc/core/{core.conf, logging.conf}` -* ospf mdr repository files when using script based install - * `/../ospf-mdr` + * `/etc/core/{core.conf, logging.conf}` +* ospf mdr repository files + * `/../ospf-mdr` +* emane repository files + * `/../emane` -### Installed Scripts +### Installed Executables +After the installation complete it will have installed the following scripts. -The following python scripts are provided. 
- -| Name | Description | -|---------------------|------------------------------------------------------------------------------| -| core-cleanup | tool to help removed lingering core created containers, bridges, directories | -| core-cli | tool to query, open xml files, and send commands using gRPC | -| core-daemon | runs the backed core server providing a gRPC API | -| core-gui | starts GUI | -| core-python | provides a convenience for running the core python virtual environment | -| core-route-monitor | tool to help monitor traffic across nodes and feed that to SDT | -| core-service-update | tool to update automate modifying a legacy service to match current naming | - -### Upgrading from Older Release +| Name | Description | +|---|---| +| core-cleanup | tool to help removed lingering core created containers, bridges, directories | +| core-cli | tool to query, open xml files, and send commands using gRPC | +| core-daemon | runs the backed core server providing TLV and gRPC APIs | +| core-gui | runs the legacy tcl/tk based GUI | +| core-imn-to-xml | tool to help automate converting a .imn file to .xml format | +| core-manage | tool to add, remove, or check for services, models, and node types | +| core-pygui | runs the new python/tk based GUI | +| core-python | provides a convenience for running the core python virtual environment | +| core-route-monitor | tool to help monitor traffic across nodes and feed that to SDT | +| core-service-update | tool to update automate modifying a legacy service to match current naming | +| coresendmsg | tool to send TLV API commands from command line | +## Upgrading from Older Release Please make sure to uninstall any previous installations of CORE cleanly before proceeding to install. -Clearing out a current install from 7.0.0+, making sure to provide options -used for install (`-l` or `-p`). - -```shell -cd -inv uninstall -``` - -Previous install was built from source for CORE release older than 7.0.0: - +Previous install was built from source: ```shell cd sudo make uninstall @@ -87,7 +106,6 @@ make clean ``` Installed from previously built packages: - ```shell # centos sudo yum remove core @@ -95,313 +113,174 @@ sudo yum remove core sudo apt remove core ``` -## Installation Examples - -The below links will take you to sections providing complete examples for installing -CORE and related utilities on fresh installations. Otherwise, a breakdown for installing -different components and the options available are detailed below. - -* [Ubuntu 22.04](install_ubuntu.md) -* [CentOS 7](install_centos.md) - -## Package Based Install - -Starting with 9.0.0 there are pre-built rpm/deb packages. You can retrieve the -rpm/deb package from [releases](https://github.com/coreemu/core/releases) page. - -The built packages will require and install system level dependencies, as well as running -a post install script to install the provided CORE python wheel. A similar uninstall script -is ran when uninstalling and would require the same options as given, during the install. - -!!! 
note - - PYTHON defaults to python3 for installs below, CORE requires python3.9+, pip, - tk compatibility for python gui, and venv for virtual environments - -Examples for install: - -```shell -# recommended to upgrade to the latest version of pip before installation -# in python, can help avoid building from source issues -sudo -m pip install --upgrade pip -# install vcmd/vnoded, system dependencies, -# and core python into a venv located at /opt/core/venv -sudo install -y ./ -# disable the venv and install to python directly -sudo NO_VENV=1 install -y ./ -# change python executable used to install for venv or direct installations -sudo PYTHON=python3.9 install -y ./ -# disable venv and change python executable -sudo NO_VENV=1 PYTHON=python3.9 install -y ./ -# skip installing the python portion entirely, as you plan to carry this out yourself -# core python wheel is located at /opt/core/core--py3-none-any.whl -sudo NO_PYTHON=1 install -y ./ -# install python wheel into python of your choosing -sudo -m pip install /opt/core/core--py3-none-any.whl -``` - -Example for removal, requires using the same options as install: - -```shell -# remove a standard install -sudo remove core -# remove a local install -sudo NO_VENV=1 remove core -# remove install using alternative python -sudo PYTHON=python3.9 remove core -# remove install using alternative python and local install -sudo NO_VENV=1 PYTHON=python3.9 remove core -# remove install and skip python uninstall -sudo NO_PYTHON=1 remove core -``` - -### Installing OSPF MDR - -You will need to manually install OSPF MDR for routing nodes, since this is not -provided by the package. - -```shell -git clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git -cd ospf-mdr -./bootstrap.sh -./configure --disable-doc --enable-user=root --enable-group=root \ - --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \ - --localstatedir=/var/run/quagga -make -j$(nproc) -sudo make install -``` - -When done see [Post Install](#post-install). - -## Script Based Install - -The script based installation will install system level dependencies, python library and -dependencies, as well as dependencies for building CORE. - -The script based install also automatically builds and installs OSPF MDR, used by default -on routing nodes. This can optionally be skipped. - -Installaion will carry out the following steps: - +## Automated Install +The automated install will do the following: +* install base tools needed for installation + * python3, pip, pipx, invoke, poetry * installs system dependencies for building core -* builds vcmd/vnoded and python grpc files -* installs core into poetry managed virtual environment or locally, if flag is passed -* installs systemd service pointing to appropriate python location based on install type * clone/build/install working version of [OPSF MDR](https://github.com/USNavalResearchLaboratory/ospf-mdr) +* installs core into poetry managed virtual environment or locally, if flag is passed +* installs scripts pointing pointing to appropriate python location based on install type +* installs systemd service pointing to appropriate python location based on install type -!!! note +After installation has completed you should be able to run `core-daemon` and `core-gui`. 
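+A quick way to verify the result is to start the daemon and then launch the GUI;
+the systemd unit name below is an assumption based on the service the installer
+sets up.
+
+```shell
+# start and check the daemon through systemd (unit name assumed to be core-daemon)
+sudo systemctl start core-daemon
+sudo systemctl status core-daemon
+# alternatively, run the daemon in the foreground
+sudo core-daemon
+# then launch the GUI from another terminal
+core-gui
+```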
- Installing locally comes with its own risks, it can result it potential - dependency conflicts with system package manager installed python dependencies +> **NOTE:** installing locally comes with its own risks, it can result it potential +> dependency conflicts with system package manager installed python dependencies -!!! note - - Provide a prefix that will be found on path when running as sudo, - if the default prefix /usr/local will not be valid - -The following tools will be leveraged during installation: - -| Tool | Description | -|---------------------------------------------|-----------------------------------------------------------------------| -| [pip](https://pip.pypa.io/en/stable/) | used to install pipx | -| [pipx](https://pipxproject.github.io/pipx/) | used to install standalone python tools (invoke, poetry) | -| [invoke](http://www.pyinvoke.org/) | used to run provided tasks (install, uninstall, reinstall, etc) | -| [poetry](https://python-poetry.org/) | used to install python virtual environment or building a python wheel | - -First we will need to clone and navigate to the CORE repo. +> **NOTE:** provide a prefix that will be found on path when running as sudo, +> if the default prefix /usr/local will not be valid +`install.sh` will attempt to determine your OS by way of `/etc/os-release`, currently it supports +attempts to install OSs that are debian/redhat like (yum/apt). ```shell # clone CORE repo git clone https://github.com/coreemu/core.git cd core -# install dependencies to run installation task -./setup.sh -# skip installing system packages, due to using python built from source -NO_SYSTEM=1 ./setup.sh +# script usage: install.sh [-v] [-d] [-l] [-p ] +# +# -v enable verbose install +# -d enable developer install +# -l enable local install, not compatible with developer install +# -p install prefix, defaults to /usr/local -# run the following or open a new terminal -source ~/.bashrc +# install core to virtual environment +./install.sh -p -# Ubuntu -inv install -# CentOS -inv install -p /usr -# optionally skip python system packages -inv install --no-python -# optionally skip installing ospf mdr -inv install --no-ospf - -# install command options -Usage: inv[oke] [--core-opts] install [--options] [other tasks here ...] - -Docstring: - install core, poetry, scripts, service, and ospf mdr - -Options: - -d, --dev install development mode - -i STRING, --install-type=STRING used to force an install type, can be one of the following (redhat, debian) - -l, --local determines if core will install to local system, default is False - -n, --no-python avoid installing python system dependencies - -o, --[no-]ospf disable ospf installation - -p STRING, --prefix=STRING prefix where scripts are installed, default is /usr/local - -v, --verbose +# install core locally +./install.sh -p -l ``` -When done see [Post Install](#post-install). - ### Unsupported Linux Distribution - For unsupported OSs you could attempt to do the following to translate an installation to your use case. -* make sure you have python3.9+ with venv support +* make sure you have python3.6+ with venv support * make sure you have python3 invoke available to leverage `/tasks.py` ```shell +cd + +# Usage: inv[oke] [--core-opts] install [--options] [other tasks here ...] 
+#
+# Docstring:
+# install core, poetry, scripts, service, and ospf mdr
+#
+# Options:
+# -d, --dev install development mode
+# -i STRING, --install-type=STRING
+# -l, --local determines if core will install to local system, default is False
+# -p STRING, --prefix=STRING prefix where scripts are installed, default is /usr/local
+# -v, --verbose enable verbose
+
+# install virtual environment
+inv install -p 
+
+# install locally
+inv install -p -l
+
# this will print the commands that would be ran for a given installation
# type without actually running them, they may help in being used as
# the basis for translating to your OS
inv install --dry -v -p -i 
```

-## Dockerfile Based Install
-
-You can leverage one of the provided Dockerfiles, to run and launch CORE within a Docker container.
-
-Since CORE nodes will leverage software available within the system for a given use case,
-make sure to update and build the Dockerfile with desired software.
+## Running User Scripts
+If you create your own python scripts to run CORE directly or using the gRPC/TLV
+APIs, you will need to make sure you are running them within the context of the
+installed virtual environment. To help support this, CORE provides the `core-python`
+executable. This executable will allow you to enter CORE's python virtual
+environment interpreter or to run a script within it.
+
+For installations using a virtual environment:
```shell
-# clone core
-git clone https://github.com/coreemu/core.git
-cd core
-# build image
-sudo docker build -t core -f dockerfiles/Dockerfile. .
-# start container
-sudo docker run -itd --name core -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --privileged core
-# enable xhost access to the root user
-xhost +local:root
-# launch core-gui
-sudo docker exec -it core core-gui
+core-python
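+# core-python can also run one of your own scripts with the virtual
+# environment's interpreter (the script path below is purely an example)
+core-python /path/to/my_script.py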