.. This file is part of the CORE Manual
   (c)2012-2013 the Boeing Company

.. _Introduction:

************
Introduction
************

The Common Open Research Emulator (CORE) is a tool for building virtual
networks. As an emulator, CORE builds a representation of a real computer
network that runs in real time, as opposed to simulation, where abstract models
are used. The live-running emulation can be connected to physical networks and
routers. It provides an environment for running real applications and
protocols, taking advantage of virtualization provided by the Linux operating
system.

.. index::
   single: key features

Some of its key features are:

* efficient and scalable
* runs applications and protocols without modification
* easy-to-use GUI
* highly customizable

CORE is typically used for network and protocol research,
demonstrations, application and platform testing, evaluating networking
scenarios, security studies, and increasing the size of physical test networks.

What's New?
===========

For readers who are already familiar with CORE and have read this manual before, below is a list of what changed in version 5.0:

* :ref:`Services` - Added Ryu SDN controller and Open vSwitch services
* :ref:`Python_Scripting` - Updated script examples to reflect code changes

.. index::
   single: CORE; components of
   single: CORE; API
   single: API
   single: CORE; GUI

.. _Architecture:

Architecture
============

The main components of CORE are shown in :ref:`core-architecture`. A
*CORE daemon* (backend) manages emulation sessions. It builds emulated networks
using kernel virtualization for virtual nodes and some form of bridging and
packet manipulation for virtual networks. The nodes and networks come together
via interfaces installed on nodes. The daemon is controlled via the
graphical user interface, the *CORE GUI* (frontend).
The daemon uses Python modules
that can be imported directly by Python scripts.
The GUI and the daemon communicate using a custom,
asynchronous, sockets-based API, known as the *CORE API*. The dashed line
in the figure notionally depicts the user-space and kernel-space separation.
The components the user interacts with are colored blue: GUI, scripts, or
command-line tools.
The system is modular to allow mixing different components. The virtual
networks component, for example, can be realized with other network
simulators and emulators, such as ns-3 and EMANE.
Another example is how a session can be designed and started using
the GUI, and continue to run in "headless" operation with the GUI closed.
The CORE API is sockets-based, allowing different components to run on
different physical machines.

.. _core-architecture:

.. figure:: figures/core-architecture.*
   :alt: CORE architecture diagram
   :align: center
   :scale: 75%

   CORE Architecture

The CORE GUI is a Tcl/Tk program; it is started using the command
``core-gui``. The CORE daemon, named ``core-daemon``, is usually started via
the init script (``/etc/init.d/core-daemon`` or ``core-daemon.service``,
depending on the platform). The CORE daemon manages sessions of virtual nodes
and networks; other scripts and utilities may be used to further control these
sessions.
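
For example, on a systemd-based distribution the backend and frontend might be
started as follows. This is a minimal sketch; the service name and init
mechanism depend on how CORE was packaged for your platform::

    # start the backend daemon (requires root privileges)
    sudo systemctl start core-daemon

    # launch the Tcl/Tk frontend as a regular user
    core-gui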

.. _How_Does_It_Work?:

How Does it Work?
=================

A CORE node is a lightweight virtual machine. The CORE framework runs on Linux.

.. index::
   single: Linux; virtualization
   single: Linux; containers
   single: LXC
   single: network namespaces

* :ref:`Linux` - CORE uses Linux network namespace virtualization to build virtual nodes, and ties them together with virtual networks using Linux Ethernet bridging.

.. _Linux:

Linux
-----

Linux network namespaces (also known as netns, LXC, or `Linux containers
<http://lxc.sourceforge.net/>`_) are the primary virtualization
technique used by CORE. Namespace support has been part of the mainline Linux
kernel since 2.6.24, and recent Linux distributions such as Fedora and Ubuntu
ship namespace-enabled kernels out of the box.

A namespace is created using the ``clone()`` system call. Each namespace has
its own process environment and private network stack. Network namespaces
share the same filesystem in CORE.
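
The same isolation can be explored by hand with the ``iproute2`` tools. This
is only an illustration of what a network namespace provides; CORE itself
creates namespaces directly with ``clone()`` rather than through ``ip netns``::

    # create a named network namespace (run as root)
    ip netns add demo

    # the new namespace starts with only an isolated loopback interface
    ip netns exec demo ip link show

    # remove it when finished
    ip netns delete demo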

.. index::
   single: Linux; bridging
   single: Linux; networking
   single: ebtables

CORE combines these namespaces with Linux Ethernet bridging
to form networks. Link characteristics are applied using Linux Netem queuing
disciplines. Ebtables provides Ethernet frame filtering on Linux bridges;
wireless networks are emulated by controlling which interfaces can send and
receive with ebtables rules.
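
The following commands illustrate the kinds of operations CORE performs behind
the scenes. This is only a hand-run sketch: the bridge and interface names
(``b.1``, ``veth0``, ``veth1``) are hypothetical, and the exact commands CORE
issues may differ::

    # create a bridge to act as a virtual network (run as root)
    ip link add name b.1 type bridge
    ip link set b.1 up

    # attach a node's virtual Ethernet interface to the bridge
    ip link set veth0 master b.1

    # apply link effects (delay, loss) with a netem queuing discipline
    tc qdisc add dev veth0 root netem delay 20ms loss 1%

    # drop frames between two interfaces to emulate nodes out of wireless range
    ebtables -A FORWARD -i veth0 -o veth1 -j DROP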

.. index::
   single: IMUNES
   single: VirtNet
   single: prior work

.. _Prior_Work:

Prior Work
==========

The Tcl/Tk CORE GUI was originally derived from the open source
`IMUNES <http://www.tel.fer.hr/imunes/>`_
project from the University of Zagreb
as a custom project within Boeing Research and Technology's Network
Technology research group in 2004. Since then, that group has extended the
CORE framework to use Linux virtualization, developed a Python framework, and
made numerous user- and kernel-space improvements, such
as support for wireless networks, IPsec, the ability to distribute emulations,
simulation integration, and more. The IMUNES project also consists of userspace
and kernel components. Originally, one had to download and apply a patch for
the FreeBSD 4.11 kernel, but the more recent
`VirtNet <http://www.nlnet.nl/project/virtnet/>`_
effort has brought network stack
virtualization to the more modern FreeBSD 8.x kernel.

.. _Open_Source_Project_and_Resources:

Open Source Project and Resources
=================================

.. index::
   single: open source project
   single: license
   single: website
   single: supplemental website
   single: contributing

CORE has been released by Boeing to the open source community under the BSD
license. If you find CORE useful for your work, please contribute back to the
project. Contributions can be as simple as reporting a bug, dropping a line of
encouragement or technical suggestions to the mailing lists, or as involved as
submitting patches and maintaining aspects of the tool. To contribute to CORE,
please visit the `CORE GitHub <https://github.com/coreemu/core>`_ repository.

Besides this manual, additional resources are available online:

* `CORE website <http://www.nrl.navy.mil/itd/ncs/products/core>`_ - main project page containing demos, downloads, and mailing list information.

.. index::
   single: CORE

Goals
-----

These are the goals of the CORE project; they are similar to what we consider to be the :ref:`key features <Introduction>`.

#. Ease of use - In a few clicks the user should have a running network.
#. Efficiency and scalability - A node is more lightweight than a full virtual machine. Tens of nodes should be possible on a standard laptop computer.
#. Software re-use - Re-use real implementation code, protocols, networking stacks.
#. Networking - CORE is focused on emulating networks and offers various ways to connect the running emulation with real or simulated networks.
#. Hackable - The source code is available and easy to understand and modify.

Non-Goals
---------

This is a list of non-goals: specific things that people may be interested in but that are not areas we will pursue.

#. Reinventing the wheel - Where possible, CORE reuses existing open source components such as virtualization, Netgraph, netem, bridging, Quagga, etc.
#. 1,000,000 nodes - While the goal of CORE is to provide efficient, scalable network emulation, there is no set goal of N number of nodes. There are realistic limits on what a machine can handle as its resources are divided amongst virtual nodes. We will continue to make things more efficient and let the user determine the right number of nodes based on available hardware and the activities each node is performing.
#. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in Linux operating systems.
#. Hardware-specific - CORE itself is not an instantiation of hardware, a testbed, or a specific laboratory setup; it should run on commodity laptop and desktop PCs, in addition to high-end server hardware.