
Programmable Image Architecture

(Most of the content below is somewhat dated and assumes the Programmable Image is the one and only "CentMesh image". Other than this, the description is accurate, though slightly incomplete in parts.)

The design goals and plan drove the architecture of CentMesh. Since CentMesh was envisioned as an outdoor testbed covering an extended area, there would be no wired backbone: all data, control, and management traffic must be carried wirelessly. To avoid the cost of licensed spectrum, CentMesh was envisioned to be based on 802.11 links. We planned to use simple commodity hardware for our platform. Similarly, we would use open source software, and develop open source software ourselves. We needed to modularize and cleanly separate functionality wherever possible. An express goal was to separate the data, control, and management planes, and to allow specific software modules to be plugged in and out.

Software Components

We started exclusively with open source software, to make it easier for other groups to adopt and re-use the software produced by WolfRad in the future. We use the Linux OS, and open source drivers as much as possible. We use IP/802.11 for our data plane to begin with, and design and implement our own signaling and knowledge planes entirely. We will investigate the possibility of using MPLS as an option for our data plane later. The software we develop on top of this is structured as shown in the Figure. The software architecture is described considerably more extensively in project documentation we have produced, available through the CentMesh Wiki.

We have adopted a centralized controller strategy for distributing policy and controlling experiments. A special software module called the "communicator" channels all control signals. This enables us to embed control plane policies such as spectrum usage in a single place. The communicator uses TCP connections to transmit control signals. The decision of what signals should go where is mediated by an automatic pub/sub mechanism: modules interested in receiving control signals on a particular topic subscribe to that topic, and the entity generating the control signals only has to publish them to specific topics to reach the current set of interested listeners.
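As an illustration, a module might register its interest with the communicator roughly as in the sketch below. This is only a sketch: the newline-delimited JSON framing, the topic names, and the host/port are assumptions made for illustration, not the actual CentMesh wire format or API.

```python
import json
import socket

# Hypothetical sketch of a module talking to the communicator over TCP.
# The message framing (newline-delimited JSON), topic names, and port
# are illustrative assumptions, not the actual CentMesh protocol.

class CommunicatorClient:
    def __init__(self, host="controller", port=5000):
        self.sock = socket.create_connection((host, port))
        self.rfile = self.sock.makefile("r")

    def _send(self, msg):
        self.sock.sendall((json.dumps(msg) + "\n").encode())

    def subscribe(self, topic):
        # Ask the communicator for every control signal published on this topic.
        self._send({"op": "subscribe", "topic": topic})

    def publish(self, topic, payload):
        # The communicator forwards this to all current subscribers of the topic.
        self._send({"op": "publish", "topic": topic, "payload": payload})

    def next_message(self):
        # Block until the communicator delivers the next signal we subscribed to.
        return json.loads(self.rfile.readline())

# A link-state monitoring agent might use it like this:
# comm = CommunicatorClient()
# comm.subscribe("routing/updates")
# comm.publish("linkstate/reports",
#              {"node": 7, "neighbors": [{"id": 3, "quality": 0.9}]})
```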

The controller node contains a data repository, realized in either XML or MySQL; we have tested both solutions and are currently working with MySQL. The data repository acts as a decoupling intermediary between the various data collection and dissemination procedures. Management software modules that embed policies are in general implemented as paired managers and agents that communicate with each other through the communicator. Managers reside on the central control node and interact with the data repository. For example, a link-state monitoring manager would receive periodic updates from its agents and store this data in the repository; a routing manager would periodically (or as required) obtain this data from the repository and use it as input for routing algorithms. The resulting routes would in turn be distributed to the routing agents via the communicator. Agents implement the policy at individual nodes; e.g., the routing agent would rewrite IP forwarding tables (or switching tables), the power control agent would reconfigure the virtual interfaces provided by Atheros cards, and so on.
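For concreteness, the manager half of such a pairing might look roughly like the sketch below, which consumes agent reports and stores them in the repository. The table and column names are invented for illustration, and the database access uses the common mysql-connector-python package rather than whatever the project actually uses.

```python
import mysql.connector  # assumes the mysql-connector-python package

# Hypothetical sketch of a link-state monitoring manager: it consumes
# periodic reports published by its agents (via the communicator) and
# stores them in the MySQL data repository. The table and column names
# are illustrative, not the actual CentMesh schema.

def run_monitoring_manager(comm, db_config):
    db = mysql.connector.connect(**db_config)
    cursor = db.cursor()
    comm.subscribe("linkstate/reports")
    while True:
        msg = comm.next_message()
        report = msg["payload"]
        for neighbor in report["neighbors"]:
            cursor.execute(
                "REPLACE INTO links (node_id, neighbor_id, quality) "
                "VALUES (%s, %s, %s)",
                (report["node"], neighbor["id"], neighbor["quality"]),
            )
        db.commit()
```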

Neighbor discovery and bootstrap routing (possibly with hardcoded routes) are considered system modules, which can be replaced but not eliminated. Other core modules for which we implement simple sample strategies are routing and channel assignment. We plan to soon add power control and coarse-grained scheduling. These can then be used by researchers as templates when coding their own strategies. Note that this centralized approach to managing the testbed does not preclude experimentation with distributed policy mechanisms; the researcher has only to embed the distributed algorithm in the agents and use the management module merely to trigger its operation.
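The agent half of a pairing is correspondingly thin. A routing agent, for instance, might do little more than listen for computed routes and push them into the kernel forwarding table, as in the sketch below; the topic name and message layout are assumptions for illustration, and routes are installed with the standard Linux `ip route` tool.

```python
import subprocess

# Hypothetical sketch of a routing agent: it waits for routes computed by
# the routing manager and installs them in the kernel forwarding table.
# The topic name and message layout are illustrative assumptions.

def run_routing_agent(comm):
    comm.subscribe("routing/updates")
    while True:
        msg = comm.next_message()
        for route in msg["payload"]["routes"]:
            subprocess.run(
                ["ip", "route", "replace", route["destination"],
                 "via", route["next_hop"], "dev", route["interface"]],
                check=True,
            )
```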

Servers for typical OAM tasks, such as node status monitoring, also attach to the data repository. Clients to these servers may run on the control node or elsewhere on the Internet, since the control node is assumed to be connected to an infrastructure wired network. This allows a researcher to connect to the testbed from a remote location. Currently, we have created an OAM application that showcases this functionality: the Visualization Server, which renders the data in the repository regarding nodes, interfaces, links, and routes into a KML file, suitable for viewing in Google Earth (from any client that can access the controller node).
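The KML generation step at the heart of such a server can be quite small, as the sketch below suggests; the `nodes` query and placemark fields are illustrative, not the actual repository schema.

```python
# Hypothetical sketch of the Visualization Server's KML generation step:
# it reads node positions from the repository and emits a KML document
# that Google Earth can load. The query and fields are illustrative.

KML_HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>')
KML_FOOTER = '</Document></kml>'

def nodes_to_kml(cursor):
    cursor.execute("SELECT name, latitude, longitude FROM nodes")
    placemarks = []
    for name, lat, lon in cursor.fetchall():
        placemarks.append(
            "<Placemark><name>{}</name>"
            "<Point><coordinates>{},{},0</coordinates></Point>"
            "</Placemark>".format(name, lon, lat)  # KML order is lon,lat,alt
        )
    return "\n".join([KML_HEADER] + placemarks + [KML_FOOTER])
```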

Hardware Components

Our basic nodes feature a combination of IEEE 802.11 cards and are envisaged to soon incorporate Universal Software Radio Peripherals (USRPs): the 802.11 cards are sufficiently flexible for most networking experiments and have an excellent performance/cost ratio, while the USRPs offer considerable flexibility at the lower layers (inaccessible to 802.11 cards). Several (2-4) 802.11 cards will be available in each node, allowing for a large variety of experiments and mirroring trends in commercial mesh networking products. For the hardware we primarily rely on stock, off-the-shelf components for reasons of cost, flexibility, and the ability to leverage existing software tools for configuring and managing the network.

Thus, for the node platform we use desktop PCs with relatively good performance: processors on the order of 3-4 GHz, 4 GB of RAM, and 250 GB HDDs. While for many experiments a lower-end PC might suffice, it is important that the PC can process all logging data without slowing down the experiment. For some tasks, e.g., listening in promiscuous mode and saving all packets on all interfaces, or processing GNU Radio signals, a high-performance computer is mandatory.

The wireless cards are chosen to maximize flexibility: the Atheros and Intel cards featuring open source drivers give researchers access to many low-level features not available in other cards. In particular, the Atheros cards are closer to a software defined radio in design, with much of the 802.11 functionality delegated to the software driver; all time-critical operations are of course implemented in hardware, but even there, many of the parameters of those operations are exposed through driver-accessible variables. We use IEEE 802.11 a/b/g or a/b/g/n cards.
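As a small illustration of the kind of per-interface knob such drivers expose, a power control agent might adjust the transmit power of a card as sketched below; the interface name, the power level, and the use of the standard `iw` tool are assumptions, and the actual drivers expose many more parameters than shown here.

```python
import subprocess

# Illustrative only: adjusting a card's transmit power with the standard
# Linux `iw` tool. Interface name and power level are placeholders.

def set_tx_power(interface="wlan0", dbm=15):
    # `iw` expects transmit power in mBm (hundredths of a dBm).
    subprocess.run(
        ["iw", "dev", interface, "set", "txpower", "fixed", str(dbm * 100)],
        check=True,
    )
```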

Our current set of hardware components, and those we have experimented with in the past, are described in detail on the CentMesh Wiki.

This live Google Earth visualization of the testbed is driven by the Monitoring Agents and the Monitoring Manager (see "Architecture" for details). Testbeds being what they are, the visualization may not be available at certain times, of course. Naturally, it is only available when the Programmable image (or another image derived from it that retains this functionality) is running on the nodes.

If you do not have Google Earth installed but have the plugin, you may be able to load the visualization here.

