1) Efficient Transfer of Large Files
We have tested RAMP performance when transferring files while varying file size ([100KB, 10MB] range), path length (1, 2, or 3 hops), and bufferSize (disabled or in the [1KB, 1MB] range). To easily compare performance results, we have limited the bandwidth of each single-hop link to a maximum of 2Mbit/s.
Lower Bound Identification
To better understand the reported results, we identify a lower bound transfer time, i.e., the time needed for file transfer over a traditional TCP/IP fixed network (layer-3 routing), experimentally determined via the iperf command (no notable differences have been observed while varying the hop number). The distance between RAMP performance and this lower bound also indicates the overhead of routing choices at the application layer. The table below summarizes iperf-based results and file_size/bandwidth ratios.
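As a sanity check, the lower bound can also be derived analytically as the file_size/bandwidth ratio on the 2Mbit/s-capped link. A minimal sketch (assuming 1MB = 10^6 bytes and ignoring protocol overhead, so actual iperf-measured times are slightly higher):

```python
# Analytical lower bound: file_size / bandwidth on the capped link.
# Assumes 1MB = 10**6 bytes and ignores TCP/IP protocol overhead.

LINK_BANDWIDTH_BPS = 2_000_000  # 2Mbit/s cap applied to each single-hop link

def theoretical_transfer_time(file_size_bytes: int,
                              bandwidth_bps: int = LINK_BANDWIDTH_BPS) -> float:
    """Time (in s) to push file_size_bytes through a bandwidth_bps link."""
    return file_size_bytes * 8 / bandwidth_bps

for label, size in [("100KB", 100_000), ("1MB", 1_000_000), ("10MB", 10_000_000)]:
    print(f"{label}: {theoretical_transfer_time(size):.1f}s")
# 10MB -> 40.0s; multi-hop RAMP transfers approach this as bufferSize shrinks
```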
Theoretical and iperf-based performance.
The figure below shows the experimental results for file transfer times. When bufferSize is disabled (bufferSize >= file size), as expected, the transfer time approximately doubles for 2-hop paths and triples for 3-hop paths, given that intermediate nodes have to receive the whole RAMP packet (the entire file) before sending it to the next node. bufferSize values lower than the file size significantly reduce the transfer time: for instance, considering the 10MB file, with bufferSize=1MB the transfer time drops from 126.8s/85.6s to 63.8s/54.2s in the case of a 3-/2-hop path, respectively. Performance results further improve when adopting lower bufferSize values, rapidly approaching the iperf-based lower bound and clearly showing the very limited overhead introduced by RAMP management at the application layer.
File transfer time (y axis, in s) depending on bufferSize (x axis) for files of different sizes.
However, the file transfer time increases again when exploiting very low bufferSize values, due to the growing number of read/write operations (e.g., see the 10MB file, 3-hop path, 5KB and 1KB bufferSize values). In addition, for small file sizes the time required to open a new socket becomes non-negligible compared with the actual data transfer time; the whole file transfer time shows a linear component depending on the number of hops (see the 100KB file, 10KB/5KB/1KB bufferSize values).
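The trends above can be captured by a simple pipelined store-and-forward model: with bufferSize disabled, the single whole-file chunk must cross each hop in sequence, while smaller chunks let downstream hops forward data while upstream hops are still receiving, at the cost of a fixed per-chunk cost that dominates for very small buffers. A sketch under assumed constants (PER_CHUNK_OVERHEAD_S is illustrative, not measured):

```python
import math

BW_BPS = 2_000_000            # per-hop bandwidth cap (2Mbit/s)
PER_CHUNK_OVERHEAD_S = 0.002  # assumed fixed read/write cost per chunk per hop

def transfer_time(file_bytes: int, buffer_bytes: int, hops: int) -> float:
    """Pipelined store-and-forward model: each hop must receive a full
    bufferSize chunk before relaying it, so ceil(file/buffer) chunks flow
    through the path like pipeline stages over `hops` links."""
    chunks = math.ceil(file_bytes / buffer_bytes)
    chunk_time = min(buffer_bytes, file_bytes) * 8 / BW_BPS
    pipeline_time = (chunks + hops - 1) * chunk_time
    return pipeline_time + chunks * hops * PER_CHUNK_OVERHEAD_S

# bufferSize disabled (buffer >= file): one chunk, time scales with hop count
print(transfer_time(10_000_000, 10_000_000, 3))  # ~120s (126.8s measured)
# moderate bufferSize: close to the single-hop lower bound
print(transfer_time(10_000_000, 100_000, 3))     # ~41s
# very small bufferSize: the per-chunk overhead term causes the upturn
print(transfer_time(10_000_000, 1_000, 3))       # ~100s
```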
2) Dynamic Reconfiguration of Video Streams
We have tested RAMP performance when exploiting a real-time video conferencing service in case of abrupt connectivity interruption and dynamic video stream migration to a new path (Continuity Manager component activated). Our video service exploits the off-the-shelf VLCMediaPlayer to capture (at the sender) and play (at the receiver) the video stream from a regular webcam (25 frames/s, 320x240 resolution). Data are MPEG2-encoded and transmitted as MPEG Transport Streams encapsulated in the Real-time Transport Protocol (RTP).
The figure below shows that RAMP avoids packet dropping by only delaying packets in case of path disruption (at packet 0 reception time). After path requalification, the delayed packets accumulated at the node performing the requalification are immediately rerouted and rapidly reach the receiver, thus reducing the perceivable quality degradation (the inter-packet arrival time rapidly returns to its usual fluctuating pattern). The number of delayed packets has been shown to depend mainly on two factors: the average packet interval and the time needed to determine the new path segment. For instance, in the case of the figure, only 7 packets are delayed.
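Given these two factors, the number of delayed packets can be estimated as the ratio between the path requalification time and the average packet interval. A minimal sketch with hypothetical figures (the 0.3s requalification time and 45ms packet interval below are illustrative assumptions, chosen only to reproduce a 7-packet delay, not measured values):

```python
import math

def delayed_packets(requalification_time_s: float,
                    avg_packet_interval_s: float) -> int:
    """Packets buffered at the requalifying node: roughly those that
    arrive while the new path segment is being determined."""
    return math.ceil(requalification_time_s / avg_packet_interval_s)

# hypothetical 0.3s requalification time and 45ms average packet interval
print(delayed_packets(0.3, 0.045))  # -> 7
```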
Re-configuration performance when reacting to path disruption.
3) On-the-fly Re-casting of Multimedia Streams
The RAMP middleware is able to intercept and re-cast multimedia streams flowing through intermediate nodes (SmartSplitter component activated to perform re-casting). Our video broadcasting service exploits the off-the-shelf VLCMediaPlayer to capture (at the sender) and play (at the receiver) an RTP/MPEG-TS multimedia stream with an MPEG2 video codec at 768kbps and an MPEG-1 Layer 1 audio codec at 64kbps; the overall throughput requested for each stream (audio/video bitrate plus RTP/MPEG-TS/RAMP overhead) is about 1250kbps. To evaluate the performance of our middleware, we consider the following testbed: a multimedia streamer on NodeS provides a remote client (NodeC) with multimedia streams, while an intermediate NodeR behaves both as a client and as a re-caster.
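The ~1250kbps figure can be sanity-checked from the codec bitrates: the encoded media accounts for 832kbps, and the remainder is RTP/MPEG-TS/RAMP encapsulation overhead. A quick check (the 1.5 overhead factor below is inferred from the reported total, not an independently measured value):

```python
# Per-stream throughput budget for the re-cast RTP/MPEG-TS stream.
VIDEO_KBPS = 768      # MPEG2 video bitrate
AUDIO_KBPS = 64       # MPEG-1 Layer 1 audio bitrate
ENCAPS_FACTOR = 1.5   # assumed RTP/MPEG-TS/RAMP overhead multiplier

def stream_throughput_kbps() -> int:
    """Overall per-stream throughput: encoded media plus encapsulation."""
    return round((VIDEO_KBPS + AUDIO_KBPS) * ENCAPS_FACTOR)

print(stream_throughput_kbps())  # -> 1248, i.e., about the reported 1250kbps
```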
To fully understand and quantitatively evaluate the dynamic behavior of our middleware, we have carefully investigated the case of re-casting de/activation while stream provisioning is in progress, in particular the transitory phases immediately before and after re-casting activation (see figure below). The reported results show NodeS (up) and NodeR (down) outgoing throughput with no receiver (interval 1), with one remote receiver (interval 2), with one remote receiver plus a receiver on NodeR (intervals 3, 4, 5), without re-casting activation (intervals 1, 2, 3, 5), and with re-casting activation (interval 4).
Outgoing throughput of NodeS (up) and NodeR (down) without clients (interval 1), with a client on NodeC (interval 2), with clients on both NodeC and NodeR (intervals 3, 4, 5), with re-casting (interval 4), and without re-casting (intervals 1, 2, 3, 5).
At the beginning (interval 1), even if NodeS already offers the stream, there is no active client and therefore no outgoing traffic from NodeS; after a few seconds (interval 2), one client requests the stream; then, NodeR also starts asking for the same stream (interval 3). In case of no re-casting (intervals 3 and 5), NodeR asks NodeS for the stream, thus generating useless stream duplication and ineffective bandwidth usage. In case of re-casting activation (interval 4), the client on NodeR exploits the flow that already traverses the spontaneous network.
Delving into a more detailed view of the transitory behaviors: between intervals 3 and 4 (re-casting activation event), the remote client migrates from the dispenser on NodeS to the re-caster on NodeR, because the latter provides the same stream and is closer. To this end, the remote client asks NodeR for the program and commands NodeS to stop provisioning: stream duplication is avoided and streamer migration is transparent to the final user. Between intervals 4 and 5 (re-casting deactivation event), receivers may suffer from a temporary service interruption (points X and Y): our middleware quickly identifies and recovers from this situation by autonomously looking for a dispenser and requesting the interrupted stream again; the interruption has been shown to last about 2.5s (3.2s) for NodeR (NodeC). It is worth noting that the achieved results are based on the pessimistic case in which clients perceive stream interruption due to a packet receive timeout; very short interruptions, e.g., 100ms or less, can be achieved if NodeR explicitly notifies its clients that it is going to interrupt stream re-casting.
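The pessimistic timeout-based recovery can be sketched as a receiver-side loop: when no packet arrives within the timeout, the client autonomously looks for a dispenser and re-requests the stream. The sketch below is a simplified model, not RAMP code; find_dispenser and request_stream are hypothetical hooks standing in for the middleware's discovery and request primitives, and the 2s timeout approximates the observed interruption duration:

```python
import socket

STREAM_TIMEOUT_S = 2.0  # assumed packet-receive timeout triggering recovery

def receive_stream(sock, find_dispenser, request_stream):
    """Yield stream packets; on a receive timeout (re-casting stopped
    without notice), look up a dispenser and re-request the stream.
    find_dispenser/request_stream are hypothetical middleware hooks."""
    sock.settimeout(STREAM_TIMEOUT_S)
    while True:
        try:
            packet = sock.recv(65536)
            if not packet:        # peer closed the stream cleanly
                return
            yield packet
        except socket.timeout:
            # pessimistic case: interruption perceived only via timeout
            dispenser = find_dispenser()
            request_stream(dispenser)
```

With an explicit interruption notification from the re-caster, the recovery branch could run immediately instead of waiting for the timeout, which is why notified clients see much shorter interruptions.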
May 02, 2011