Wireless LAN Benchmarking with DBS (Distributed Benchmarking System)



Why use DBS?

Netperf is a popular tool, but it only reports aggregate throughput; it cannot show how throughput changes over time. DBS can.

Installation:

On Linux, DBS requires Perl 5 and gnuplot, both of which already ship with the Red Hat distribution. It also needs XNTP. XNTP is a time-server daemon that synchronizes your local clock with an external time server; you can also run your own time server and let other hosts contact you to synchronize your LAN time. With only two hosts, running xntp is probably not essential, but the hosts' clocks must still be aligned.
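The reason clock alignment matters is that DBS timestamps one-way traffic on two different hosts, so any clock offset between them is added verbatim to every measured delay. A minimal sketch (Python, purely illustrative numbers):

```python
# Sketch: why clock alignment matters for one-way delay measurements.
# The receiver timestamps packets with its own clock, so a constant
# clock offset between the hosts shifts every measured delay by the
# same amount (the values below are invented for illustration).

true_delay = 0.005       # 5 ms actual one-way delay
clock_offset = 0.050     # receiver clock runs 50 ms ahead of the sender

send_time = 10.000                                   # on the sender's clock
recv_time = send_time + true_delay + clock_offset    # on the receiver's clock
measured = recv_time - send_time

print(measured)   # roughly 0.055 s: a tenfold overestimate of the real delay
```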
After downloading and extracting the source code:
  1. Go to the "src" directory and fix a bug in the Makefile: on line 56, change $(UNAME) to obj (in the Beta1.2.0 version).
  2. make dir
  3. cd ../obj/Linux-2.4.18-14 (the directory name follows your kernel version)
  4. make
  5. mkdir ../../bin
  6. Copy dbsc, dbsd, and dbs_view to the new "bin" directory.
  7. Edit the dbs_view script so that it points to the real location of Perl, e.g. /usr/bin/perl.
  8. In dbs_view, set process_flag to ON. This enables throughput calculation.

Running:

  1. cd ~/dbs/bin/
  2. Compose a command file.
  3. Run the dbsd daemon on all hosts; the default port is 10710.
  4. Create a directory to store the measurements, such as ./ether/.
  5. Align the hosts' clocks.
  6. Type "dbsc sample1".
  7. After the test finishes, type "./dbs_view -f sample1 -sq sr -th r -t 0.3 -title 2" to plot the sequence numbers and throughput with gnuplot.

Analysis:

dbsd uses a pair of ports for test control; the command file specifies another pair of ports for the TCP/UDP test traffic. If tcp_trace is set to OFF in the command file, only a *.t file is stored in the output directory. Throughput is calculated by a Perl subroutine in dbs_view, so process_flag must be turned on if throughput is to be plotted.
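The throughput calculation in dbs_view works roughly as follows: the receive trace is cut into fixed-length windows (the -t 0.3 option above sets the window to 0.3 s) and the bytes in each window are converted to bits per second. A sketch in Python rather than Perl, with an assumed (timestamp, bytes) record format standing in for the *.t trace files:

```python
# Sketch of a windowed throughput calculation, similar in spirit to the
# calc_throughput subroutine in dbs_view. Function and record names here
# are hypothetical; the real script is Perl and parses *.t trace files.

def throughput_series(records, resolution):
    """records: (timestamp_sec, bytes_received) tuples, sorted by time.
    Returns a list of (window_start_sec, throughput_bits_per_sec)."""
    if not records:
        return []
    start = records[0][0]
    buckets = {}
    for t, nbytes in records:
        idx = int((t - start) / resolution)          # which window this falls in
        buckets[idx] = buckets.get(idx, 0) + nbytes
    return [(start + i * resolution, buckets[i] * 8 / resolution)
            for i in sorted(buckets)]

# Example: 2048-byte packets arriving every 0.1 s, 0.3 s windows.
trace = [(0.1 * i, 2048) for i in range(9)]
for t, bps in throughput_series(trace, 0.3):
    print(f"{t:.1f}s  {bps:.0f} bit/s")
```

Each 0.3 s window here contains three 2048-byte packets, so the plotted throughput is constant.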

"hostname_cmd" specifies the interface used for control commands. Thus we can exchange commands over the wired Ethernet while running the actual test traffic over the WLAN.

Throughput & Delay:

When DBS is used on wired Ethernet there are no duplicated UDP packets, because the CSMA/CD protocol does not retransmit frames. IEEE 802.11, however, does retransmit at the MAC layer, so duplicate packets can appear at the receiver. We therefore have to modify the original dbs_view script, especially the calc_throughput subroutine.
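One way to handle this in the modified script is to keep only the first reception of each sequence number before computing throughput. A sketch, again in Python with a hypothetical (timestamp, sequence, bytes) record format:

```python
# Sketch: filtering duplicate UDP receptions before computing throughput.
# An 802.11 MAC retransmission can deliver the same datagram twice, so we
# keep only the first copy of each sequence number. The record format
# (timestamp, sequence_number, bytes) is assumed for illustration.

def drop_duplicates(records):
    seen = set()
    unique = []
    for t, seq, nbytes in records:
        if seq not in seen:        # first time we see this sequence number
            seen.add(seq)
            unique.append((t, seq, nbytes))
    return unique

trace = [(0.00, 0, 2048),
         (0.01, 1, 2048),
         (0.02, 1, 2048),   # duplicate delivered by a MAC-layer retransmission
         (0.03, 2, 2048)]
print(drop_duplicates(trace))
```

Without this filtering, the duplicate would be counted as extra goodput and inflate the plotted throughput.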

Also, the delay calculation for UDP traffic is wrong: what is the delay of a lost packet?
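One defensible treatment is to average delay only over the packets that actually arrived and report the loss rate as a separate figure. A sketch under that assumption (hypothetical record formats; assumes the sender and receiver clocks are already synchronized):

```python
# Sketch: UDP delay statistics in the presence of losses. A lost packet
# has no defined delay, so we average only over delivered packets and
# report the loss rate separately. The dict-based record format below
# is assumed for illustration.

def udp_delay_stats(sent, received):
    """sent: {seq: send_time}, received: {seq: recv_time}.
    Returns (mean_delay_over_delivered_packets, loss_rate)."""
    delays = [received[seq] - sent[seq] for seq in sent if seq in received]
    lost = len(sent) - len(delays)
    mean_delay = sum(delays) / len(delays) if delays else None
    return mean_delay, lost / len(sent)

sent = {0: 0.00, 1: 0.10, 2: 0.20, 3: 0.30}
received = {0: 0.02, 1: 0.13, 3: 0.31}      # seq 2 was lost
print(udp_delay_stats(sent, received))
```

Treating a lost packet as having zero (or infinite) delay, as a naive script might, would skew the average either way.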

A Sample Command File:
# Sample 1
{
    sender {
        hostname = 10.0.0.1;
        hostname_cmd = 192.168.180.249;
        port = 20001;
        so_debug = OFF;
        tcp_trace = OFF;
        send_buff = 65535;
        recv_buff = 65535;
        mem_align = 2048;
        pattern {2048, 2048, 0.0, 0.0}
    }
    receiver {
        hostname = 10.0.0.2;
        hostname_cmd = 192.168.180.21;
        port = 20001;
        so_debug = OFF;
        tcp_trace = OFF;
        recv_buff = 65535;
        send_buff = 65535;
        mem_align = 2048;
        pattern {2048, 2048, 0.0, 0.0}
    }
    file = ether/test1;
    protocol = TCP;
    start_time = 0.0;
    connection_mode = BEFORE;
    end_time = 30;
    send_times = 2048;
}
The command file can be stored anywhere.