This page offers a guide to adding your own overlay protocol implementation to OverSim.
Another good starting point:
- The source code of the Chord protocol in src/overlay/chord/Chord.cc
Be careful:
This documentation covers only OMNeT++-3.3 compatible versions of OverSim !!
It is still under development and may be partly incorrect or outdated !!
---
1. Implementing new overlay modules in OverSim
This is a guide to implementing new overlay modules for OverSim. It is meant as an introductory guide; for more in-depth information you should consult the doxygen documentation. The guide is structured as follows. In order to understand the place an overlay module takes inside nodes and how it interacts with them, the module hierarchy for overlay hosts is described first. Then we explain how overlay modules should be declared using the NED language. Next, the basics of the module implementation are described using the base class BaseOverlay. Finally, we explain how to make OverSim compile and run your module.
2. OverSim nodes
OverSim nodes are the equivalent of individual terminals in the simulation environment. Nodes are based on a multi-tiered module hierarchy similar to OSI network layers (e.g. transport, overlay, application). The most used node type is called SimpleOverlayHost; its tier structure is shown in the attached figure SimpleOverlayHost.png and described below.
The UDP module is an implementation of the transport layer and is in charge of communication between nodes. Above it is the overlay (KBR) tier, where the overlay module is located. It communicates with other overlay nodes through the UDP layer and exposes its services to the application layer above it. On the third tier is the application layer, which uses the services provided by the overlay module. Application modules may also use UDP directly if necessary. Other tiers may be activated above the application if required; these connect to the tier below and to the UDP module.
2.1 Module declaration
For module declarations, OverSim uses the NED language, a topology description language. Modules should be declared in their own NED files (with extension .ned), and the file name should match the module name.
Following is a declaration for an example overlay module called MyOverlay in myOverlay.ned:
simple MyOverlay
    parameters:
        myParam1 : int,
        myParam2 : string,
        debugOutput : bool;   // obligatory!
    gates:
        in: from_udp;     // gate from the UDP layer
        out: to_udp;      // gate to the UDP layer
        in: from_app;     // gate from the application
        out: to_app;      // gate to the application
        in: direct_in;    // gate for sendDirect
endsimple
The module declaration is divided in two subsections: parameters and gates. The parameters subsection contains custom parameters established by the user, while the gates subsection establishes the connections to the other layers: from_udp and to_udp to the UDP layer, and from_app and to_app to the application layer. The direct_in gate is used for e.g. internal RPCs.
Modules can be nested inside one another. Modules without any inner modules are called simple modules and are declared using the keyword simple. Only simple modules can have their behaviour customized with C++ (see Section 3). Modules with inner nested modules, on the other hand, are called compound modules and act only as containers; they are declared with the keyword module.
An example compound module is as follows:
module MyOverlayContainer
    gates:
        in: from_udp;     // gate from the UDP layer
        out: to_udp;      // gate to the UDP layer
        in: from_app;     // gate from the application
        out: to_app;      // gate to the application
    submodules:
        innerOverlay: MyOverlay;
    connections nocheck:
        // connect our gates with the inner module
        from_udp --> innerOverlay.from_udp;
        to_udp <-- innerOverlay.to_udp;
        from_app --> innerOverlay.from_app;
        to_app <-- innerOverlay.to_app;
endmodule
2.2 Setting parameters
The user can set custom values for module parameters in the file omnetpp.ini, or in default.ini for general default values, both located in the Simulation folder (see Section 4). Parameters are hierarchic, separated by dots for each layer. For example, setting a parameter for a specific overlay module in a node can be done as follows:
SimpleOverlay.overlayTerminal[5].overlay.myProt.myParam1 = 1024
In this example, we are working with the network SimpleOverlay. From there, we select node number 5 and then its overlay module. For that module, we set the parameter myParam1 to 1024. This case, however, is very specific. We may not always work with the SimpleOverlay network, or we may need the parameter to be set for all nodes. For those cases the wildcards * and ** are of use. For example:
*.overlayTerminal[5].overlay.myProt.myParam1 = 1024
**.overlay.myProt.myParam1 = 1024
* replaces exactly one step of the hierarchy (or part of a name), while ** replaces any number of steps. In the first case, * means that the parameter is set for any network (the first step). In the second case, ** means the parameter is set for any network and any node in it (the first and second steps). Wildcards should be used sparingly, since they make it complicated for other users to determine their scope and may end up causing unexpected results (including overwriting other parameters).
Should a module parameter be set neither in omnetpp.ini nor in default.ini, nor be matched by any wildcard, OverSim will prompt the user to enter a value for each instance of the module. For simulations with a large number of nodes, setting each parameter individually quickly becomes overwhelming. Therefore, it is recommended that every module parameter be assigned a default value in default.ini.
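For instance, defaults for the hypothetical MyOverlay parameters from Section 2.1 could be added to default.ini like this (the myProt path and all values follow the examples above and are illustrative only):

```ini
# Hypothetical defaults for the example MyOverlay module (Section 2.1)
**.overlay.myProt.myParam1 = 512
**.overlay.myProt.myParam2 = "someDefault"
**.overlay.myProt.debugOutput = false
```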
3. Implementation using BaseOverlay
Overlay modules are implemented in C++ and should derive from the class BaseOverlay, which contains the necessary interface to work with other OverSim modules.
3.1 Important attributes
cModule *thisTerminal;
Pointer to the terminal module (the node) containing this overlay.
NodeHandle thisNode;
Information about the overlay node (IP address, port and overlay key).
BootstrapOracle* bootstrapOracle;
Pointer to a database of registered overlay nodes, to be used for bootstrapping.
NotificationBoard* notificationBoard;
Pointer to the notification board, which is used to generate node events.
3.2 Initialization and finalization
When the module has been created, the first function to be called is initialize(). This sets up the internals of the module and in turn calls initializeOverlay(), which initializes the overlay. These initialization functions should only be used to set up internal variables, like timers and statistics vectors, as there is no guarantee that module creation has finished or that any other overlay node has been created yet. For that reason, bootstrapping should only be attempted after joinOverlay() has been called. When the overlay module is about to be destroyed or the simulation finishes, the overlay can use finishOverlay() to finalize itself.
void initialize(int stage);
First function to be called; it initializes the bare bones of the overlay module: it reads the necessary parameters, initializes RPCs, and sets up watches and statistics vectors. When it is done, it calls initializeOverlay().
If the joinOnApplicationRequest parameter is not set, it automatically calls join() with a random key. Otherwise, the application must call join() manually to start the joining process.
void initializeOverlay(int stage);
To be overridden by the overlay; this is where the overlay implements its own initialization.
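As a sketch, an initializeOverlay() implementation for the hypothetical MyOverlay module from Section 2.1 might look like this. myParam1 and myTimer are assumed member variables, and the stage guard follows the usual OMNeT++ multi-stage initialization pattern (MIN_STAGE_OVERLAY is assumed to be OverSim's overlay initialization stage constant):

```cpp
// Sketch only: MyOverlay is the hypothetical example module from Section 2.1.
void MyOverlay::initializeOverlay(int stage)
{
    // OMNeT++ calls this for several init stages; do the work in one stage only
    if (stage != MIN_STAGE_OVERLAY)
        return;

    // read the parameters declared in the NED file
    myParam1 = par("myParam1");

    // create a self-message to drive periodic maintenance
    myTimer = new cMessage("maintenance timer");
}
```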
void join(OverlayKey nodeID);
Begins the bootstrapping. This function only needs to be called manually when joinOnApplicationRequest is true. When finished, it calls joinOverlay().
void joinOverlay();
To be overridden by the overlay to start bootstrapping. An overlay can obtain information about other nodes for bootstrapping through the bootstrapOracle functions getBootstrapNode() and getRandomNode(). When bootstrapping is finished, the overlay should call setOverlayReady(true).
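A minimal joinOverlay() sketch using the bootstrap oracle might look like this (the isUnspecified() check assumes the oracle returns an unspecified handle when no other node exists yet):

```cpp
// Sketch only: bootstrap via the global node database.
void MyOverlay::joinOverlay()
{
    NodeHandle bootstrapNode = bootstrapOracle->getBootstrapNode();

    if (bootstrapNode.isUnspecified()) {
        // we are the first node in the overlay and are ready immediately
        setOverlayReady(true);
    } else {
        // contact bootstrapNode here, e.g. with an RPC (see Section 3.5),
        // and call setOverlayReady(true) once joining has completed
    }
}
```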
void setOverlayReady(bool ready);
The overlay should call this function when it has finished bootstrapping and is in a ready state (or inversely, when it leaves that state).
void finishOverlay();
To be overridden by the overlay. This function is called when the module is about to be destroyed, so that the overlay can finalize itself.
3.3 Messages
The main way to communicate with other nodes is through packets. To send one, use sendMessageToUDP() with the destination transport address (IP address plus port number) and the message as parameters. To receive UDP messages, the overlay needs to override handleUDPMessage(). For communication with the application module, the functions sendMessageToApp() and handleAppMessage() can be used in a similar way.
void sendMessageToUDP(const TransportAddress& dest, cMessage* msg);
Sends the given message to address dest.
void handleUDPMessage(BaseOverlayMessage* msg);
Called when a non-RPC/non-BaseRouteMessage message arrives from UDP. May be overridden by the overlay.
void sendMessageToApp(cMessage *msg);
Sends the given message to the application module (TBI)
void handleAppMessage(cMessage* msg);
Called when a non-RPC/non-CommonAPI message arrives from the application. May be overridden by the overlay.
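A handleUDPMessage() override typically dispatches on the concrete message type. In the following sketch, MyNeighborMessage and processNeighborMessage() are hypothetical, protocol-specific names:

```cpp
// Sketch only: dispatch incoming overlay maintenance traffic.
void MyOverlay::handleUDPMessage(BaseOverlayMessage* msg)
{
    MyNeighborMessage* neighborMsg = dynamic_cast<MyNeighborMessage*>(msg);
    if (neighborMsg != NULL) {
        // handle protocol-specific maintenance traffic
        processNeighborMessage(neighborMsg);
    } else {
        // the receiver owns incoming messages, so unknown ones must be deleted
        delete msg;
    }
}
```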
3.4 Key Based Routing (KBR)
To send a message through the overlay to a key, the function sendToKey() is called. It uses a generic routing algorithm, based on the results of findNode(), to locate the responsible node. The function findNode(), the centerpiece of the KBR system, must be implemented by the overlay and returns a list of nodes close to the given overlay key. handleFailedNode() is called whenever a node returned by findNode() could not be reached.
void sendToKey(const OverlayKey& key, BaseOverlayMessage* message, int numSiblings = 1, const std::vector<TransportAddress>& sourceRoute = TransportAddress::UNSPECIFIED_NODES, RoutingType routingType = DEFAULT_ROUTING);
Sends the given message to the overlay key.
sourceRoute determines the route that the message will follow. If not specified, it sends the message using a generic routing algorithm using the node vector given by findNode.
routingType specifies how the message will be routed.
NodeVector* findNode(const OverlayKey& key, int numRedundantNodes, int numSiblings, BaseOverlayMessage* msg = NULL);
Must be overridden by the overlay; it returns the numSiblings closest nodes to key known in the routing topology.
bool isSiblingFor(const NodeHandle& node, const OverlayKey& key, int numSiblings, bool* err);
Must be overridden by the overlay; it determines whether the node parameter is among the numSiblings closest nodes to key. If numSiblings equals 1, it answers whether node is the closest node to key. Note that this function must be consistent with findNode(): if isSiblingFor() returns true, an equivalent call to findNode() should return the node parameter as part of the vector. The err parameter returns whether an error occurred.
bool handleFailedNode(const TransportAddress& failed);
Called whenever a node returned by findNode() was unreachable. May be overridden by the overlay.
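As a rough sketch, a findNode() implementation could look like the following. routingTable and its closestNodes() helper are hypothetical data structures of the example overlay, not part of BaseOverlay, and the exact NodeVector API may differ:

```cpp
// Sketch only: return up to numRedundantNodes known nodes close to key.
NodeVector* MyOverlay::findNode(const OverlayKey& key, int numRedundantNodes,
                                int numSiblings, BaseOverlayMessage* msg)
{
    NodeVector* nextHops = new NodeVector();
    bool err;

    if (isSiblingFor(thisNode, key, numSiblings, &err)) {
        // we are responsible for the key ourselves
        nextHops->push_back(thisNode);
    } else {
        // otherwise consult the overlay's own routing state
        // (routingTable / closestNodes() are hypothetical helpers)
        routingTable->closestNodes(key, numRedundantNodes, *nextHops);
    }
    return nextHops;
}
```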
3.5 Remote Procedure Calls
RPCs are remote procedure calls which nodes use to request information from each other. Three calls can be used to initiate an RPC query: sendRouteRpcCall(), which sends an RPC to a given key; sendUdpRpcCall(), which sends it to a given transport address; and sendInternalRpcCall(), which sends it to a different tier on the same node. A node receiving an RPC handles it in handleRpc() and responds to it using sendRpcResponse(). In turn, the calling overlay node can use handleRpcResponse() to handle returning RPC responses. handleRpcTimeout() is called whenever an RPC could not be delivered. See Common/BaseRpc.h for a detailed explanation of the parameters.
3.5.1 Sending Remote Procedure Calls
inline uint32_t sendUdpRpcCall(const TransportAddress& dest, BaseCallMessage* msg, cPolymorphic* context = NULL, simtime_t timeout = -1, int retries = 0, int rpcId = -1, RpcListener* rpcListener = NULL);
Sends the RPC message msg via UDP to the address dest. context is a pointer to an arbitrary object which can be used to store additional state information. timeout is the time to wait until a call is declared lost; retries is the number of times to retry a lost call. rpcId is an RPC identifier to differentiate between calls. rpcListener is a listener object that will be notified of responses and timeout events.
inline uint32_t sendRouteRpcCall(CompType destComp, const TransportAddress& dest, const OverlayKey& destKey, BaseCallMessage* msg, cPolymorphic* context = NULL, RoutingType routingType = DEFAULT_ROUTING, simtime_t timeout = -1, int retries = 0, int rpcId = -1, RpcListener* rpcListener = NULL);
Sends the RPC message through the overlay to the key destKey.
DestComp specifies the destination tier, and can be OVERLAY_COMP for the overlay, TIER1_COMP for the first application tier, TIER2_COMP and so on. The tier of the calling node can be obtained with getThisCompType().
context is a pointer to an arbitrary object which can be used to store additional state information. routingType determines the routing algorithm. timeout is the time to wait until a call is declared lost; retries is the number of times to retry a lost call. rpcId is an RPC identifier to differentiate between calls. rpcListener is a listener object that will be notified of responses and timeout events.
inline uint32_t sendInternalRpcCall(CompType destComp, BaseCallMessage* msg, cPolymorphic* context = NULL, simtime_t timeout = -1, int retries = 0, int rpcId = -1, RpcListener* rpcListener = NULL);
Sends the RPC message to the same node but the tier destComp.
destComp specifies the destination tier, and can be OVERLAY_COMP for the overlay, TIER1_COMP for the first application tier, TIER2_COMP and so on. The tier of the calling node can be obtained with getThisCompType(). timeout is the time to wait until a call is declared lost; retries is the number of times to retry a lost call. rpcId is an RPC identifier to differentiate between calls. rpcListener is a listener object that will be notified of responses and timeout events.
3.5.2 Receiving Remote Procedure Calls
bool handleRpc(BaseCallMessage* msg);
To be overridden by the overlay, it is called whenever an RPC is received. An alternative to using a switch statement to dispatch incoming RPCs is given by the macros in Common/RpcMacros.h, which can be used the following way:
RPC_SWITCH_START( msg )
    RPC_DELEGATE( Join, rpcJoin );
    RPC_DELEGATE( Notify, rpcNotify );
RPC_SWITCH_END( )
In this example, RPC_SWITCH_START opens the switch. RPC_DELEGATE casts the message to the type formed by appending "Call" to its first parameter (here, JoinCall and NotifyCall) and passes it to the function named by the second parameter (rpcJoin() and rpcNotify()). RPC_SWITCH_END closes the switch. RPC_HANDLED can be queried at any point to see whether the RPC has already been handled.
void handleRpcTimeout(BaseCallMessage* msg, const TransportAddress& dest, int rpcId, const OverlayKey& destKey);
To be overridden by the overlay, it is called when an RPC times out.
3.5.3 Replying to Remote Procedure Calls
void sendRpcResponse(BaseCallMessage* call, BaseResponseMessage* response);
Must be called by the overlay to respond to a given RPC.
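For example, a hypothetical handler for the JoinCall from the macro example in Section 3.5.2 could build and send its response like this (JoinCall and JoinResponse are assumed to be message types the overlay defines in a .msg file):

```cpp
// Sketch only: handle an incoming Join RPC and answer it.
void MyOverlay::rpcJoin(JoinCall* joinCall)
{
    JoinResponse* joinResponse = new JoinResponse("JoinResponse");
    // fill in protocol-specific response fields here ...

    // hand both messages over to the RPC layer for delivery
    sendRpcResponse(joinCall, joinResponse);
}
```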
void handleRpcResponse( BaseResponseMessage* msg, int rpcId, simtime_t rtt );
To be overridden by the overlay, it is called whenever an RPC response is received.
3.5.4 Ping RPC Calls
Ping RPC calls are convenience functions already implemented.
void pingNode(const TransportAddress& dest, simtime_t timeout = -1, int retries = 0, cPolymorphic* context = NULL, const char* caption = "PING", RpcListener* rpcListener = NULL, int rpcId = -1, TransportType transportType = INVALID_TRANSPORT, bool overrideCache = false);
Pings the node dest. When a node replies, the callback function pingResponse is invoked.
The parameters timeout, retries, rpcListener and rpcId are the same as for sendRouteRpcCall(). overrideCache determines whether the RTT value of the ping call should be cached.
void pingResponse(PingResponse* response, cPolymorphic* context, int rpcId, simtime_t rtt);
To be overridden by the overlay, it is called when a ping response arrives. response is the RPC reply message; context and rpcId are the same parameters as in the corresponding pingNode() call. The rtt parameter contains the measured round-trip time.
void pingTimeout(PingCall* call, const TransportAddress& dest, cPolymorphic* context, int rpcId);
To be overridden by the overlay, it is called after a ping RPC times out. call is the RPC call message; context and rpcId are the same parameters as in the corresponding pingNode() call.
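As a usage sketch, an overlay could probe a neighbor and log the measured round-trip time; checkNeighbor() and neighborNode are hypothetical names of the example overlay:

```cpp
// Sketch only: probe a neighbor's liveness via the built-in ping RPC.
void MyOverlay::checkNeighbor()
{
    // pingResponse() or pingTimeout() will be called back later
    pingNode(neighborNode);
}

void MyOverlay::pingResponse(PingResponse* response, cPolymorphic* context,
                             int rpcId, simtime_t rtt)
{
    EV << "ping RTT to neighbor: " << rtt << endl;
}
```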
4. Setting up the network: the configuration file
Now that the overlay module is implemented, we still need to set up a network in order to run that overlay. To do that, we need to edit the omnetpp.ini file, or set up a custom configuration file. The configuration file should be located in the Simulation folder.
The first line of the configuration file is the inclusion of the default values. That is done by adding the following line:
include ./default.ini
Each simulation environment is defined as a run. A configuration file can contain any number of runs, each with a different number. We need to set up the network in that run. There are two base networks: SimpleUnderlay and Ipv4Underlay. SimpleUnderlay is a simplified flat network where each node is assigned coordinates, packet latencies are calculated from the distance between the source and destination node coordinates, and all nodes are directly connected to one another. Ipv4Underlay emulates real-life networks and contains hierarchies of nodes, routers, and backbones. The network type is set with the first-tier parameter network. Network modules can be accessed under the names SimpleNetwork for SimpleUnderlay and Ipv4Network for Ipv4Underlay. The module in charge of building the network is called underlayConfigurator.
Once the network type has been set, we need to declare the node classes. We can declare each node to be made of the same components and attributes, or create different classes, each with its own attributes. Each class is represented by a churnGenerator. For example, we can create a network with one type of node:
[Run 1]
network = SimpleNetwork

# The following applies to SimpleNetwork
*.underlayConfigurator.churnGeneratorTypes = "LifetimeChurn"   # one node type
# Since we only have one churn generator, the following applies to SimpleNetwork.churnGenerator[0]
**.numTiers = 1                      # only one application tier
**.tier1Type = "MyAppModule"         # module name of the application tier
**.overlayType = "MyOverlay"         # module name of the overlay
**.lifetimeMean = 10000              # mean session time in seconds
**.targetOverlayTerminalNum = 10     # target number of nodes
The following applies to a network with two node classes: a server and a client. All other values are the defaults set in default.ini.
[Run 2]
*.underlayConfigurator.churnGeneratorTypes = "LifetimeChurn NoChurn"   # two churn generators
# First churn generator
*.churnGenerator[0].tier1Type = "MyClientModule"   # module name of the application tier
*.churnGenerator[0].overlayType = "MyOverlay"      # module name of the overlay
# Second churn generator
*.churnGenerator[1].tier1Type = "MyServerModule"   # module name of the application tier
*.churnGenerator[1].overlayType = "MyOverlay"      # module name of the overlay
5. Compiling and running
In order for OverSim to find your files, you need to make the following changes: edit makemakefiles in the root folder and add an include for your directory to ALL_OVERSIM_INCLUDES. Change the “all:” section accordingly by adding your folder to the list.
Now run

./makemake
make

in the root folder. That should compile your new modules.
In order to run it, you need to set up your simulation run in omnetpp.ini as explained in Section 4. Make sure that you selected default values for all parameters in default.ini, or you'll be prompted for a value when the simulation begins, once for each instance of the parameter. To start OverSim, enter the directory Simulations and run:
../bin/OverSim [-f customConfigFile] [-r runNumber]
If you don't specify a run number, you'll be prompted for one if the GUI is enabled. Otherwise, all runs in omnetpp.ini (or the given custom config file) will be executed.
Have fun!
Attachments (1)
- SimpleOverlayHost.png (20.4 KB)