Netem is a great tool for simulating a WAN connection, with all the expected latency, jitter, packet loss, duplication, and bandwidth limitations. These instructions walk you through setting up a machine that sits between your server and client that emulates the behavior of a WAN connection. I use Fedora 14, so your distro may be a little different, but hopefully this post gets you pointed in the right direction.
The picture below shows the 2 most common configurations for a Netem box:
I’ve always used the first configuration, but it doesn’t really matter.
1. Find a Suitable System
- Any “reasonable” machine that can run Fedora 14 (I use an old Pentium 4 box since I don’t need to simulate a high-speed link)
- It must have 2 network interfaces
- It’s nice to use a smaller box if you want it to be portable
2. Install bridge-utils
Make sure you are root, and run:
rpm -q bridge-utils
to see if bridge-utils is installed. If it isn't, run:
yum install bridge-utils
3. Bridge the 2 Network Interfaces
First make sure to clear the network configuration for your interfaces:
ifconfig eth0 0.0.0.0
ifconfig eth1 0.0.0.0
Then create the bridge and bring it up:
brctl addbr br0
brctl setfd br0 0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up
Note that we disable the forwarding delay ('setfd'). This makes the bridge start passing traffic immediately instead of waiting out the default forwarding delay.
One final step I had to perform is disabling kernel-level filtering on the bridge. This is done by writing 0 to the bridge nodes under proc:
for f in /proc/sys/net/bridge/bridge-nf-*; do echo 0 > $f; done
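Putting the commands in this step together, here is the whole bridge setup as a single script. This is a sketch: the interface names eth0 and eth1 are assumptions and may differ on your box.

```shell
#!/bin/sh
# Clear any IP configuration from the two interfaces (names are assumptions)
ifconfig eth0 0.0.0.0
ifconfig eth1 0.0.0.0

# Create the bridge, disable the forwarding delay, and attach both interfaces
brctl addbr br0
brctl setfd br0 0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up

# Disable kernel-level filtering on the bridge
for f in /proc/sys/net/bridge/bridge-nf-*; do echo 0 > "$f"; done

# Verify: br0 should appear with both interfaces listed
brctl show br0
```

Running brctl show at the end is a quick sanity check that both interfaces actually joined the bridge before you start pushing traffic through it.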
Note: If for some reason none of this works for you, check out the Linux Foundation page on network bridging.
4. Configure Netem
Netem is actually used in conjunction with the Traffic Control application,
tc. I’m not going to go into detail here, but suffice it to say that
tc allows you to do packet shaping and adjust packet scheduling. Check out the
tc man page for more information.
tc allows you to specify the "queueing discipline" (or "qdisc" for short) used for sending outbound packets on an interface (the fact that it operates on outbound packets only is important to remember). Basically, a qdisc defines how outbound packets are ordered and sent. To view the current qdisc setup on your box, type:
tc qdisc show
The default qdisc is pfifo_fast. We’re going to change this to use a combination of Netem and Token Bucket Filter.
Note: For the following examples, I assume
eth0 is connected to the network (client-side), and
eth1 is directly connected to the server.
The Token Bucket Filter (tbf) is used to limit how much data can exit the network interface per second… perfect for simulating WAN bandwidth limitations. Let's assume we want to emulate a client-side WAN connection of 768kbps down and 128kbps up. Since the server is connected to eth1, the eth1 interface receives inbound traffic from the server and eth0 sends that traffic outbound to the client. Since we know a qdisc works on outbound traffic only, we need to limit eth0 for our download speed of 768kbps. Conversely, we configure eth1 for our upload speed of 128kbps.
tc qdisc replace dev eth0 root handle 1:0 tbf rate 768kbit burst 2048 latency 100ms
tc qdisc replace dev eth1 root handle 2:0 tbf rate 128kbit burst 2048 latency 100ms
We use the 'replace' command to overwrite any qdisc setting that's there (you can use the 'del' command to simply remove qdiscs). We set the qdisc as the 'root' of the tree, and configure the tbf 'rate' accordingly. The 'burst' and 'latency' parameters control the initial number of tokens in the bucket and how long queued packets can hang around before being dropped, respectively.
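As a sanity check on the 'burst' value: tbf releases tokens on kernel timer ticks, so burst needs to be at least rate / HZ bytes or the configured rate can never actually be reached. A rough back-of-the-envelope sketch (HZ=250 is an assumption here; check your kernel's timer frequency):

```shell
RATE_BPS=768000                      # 768kbit, in bits per second
HZ=250                               # kernel timer frequency (assumption)
MIN_BURST=$(( RATE_BPS / 8 / HZ ))   # bytes that must be sendable per tick
echo "minimum burst: ${MIN_BURST} bytes"   # prints: minimum burst: 384 bytes
```

Since 2048 comfortably exceeds 384 bytes, the burst value in the commands above is safe at these rates; for faster simulated links you would need to scale it up.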
We can append qdiscs that allow us to use different tools to control how our simulated WAN behaves. In this example, I'll use Netem to artificially add 57ms of latency to the download connection with a random variation of +/- 13ms:
tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 57ms 13ms
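The same pattern works on the upload side, and tc's 'del' command undoes everything when you're done testing. A sketch (the 0.5% loss figure is purely illustrative, not part of the setup above):

```shell
# Add the same latency, plus a little random packet loss, to the upload path
tc qdisc add dev eth1 parent 2:1 handle 20: netem delay 57ms 13ms loss 0.5%

# When finished, remove all shaping from both interfaces
tc qdisc del dev eth0 root
tc qdisc del dev eth1 root
```

Deleting the root qdisc drops the attached netem children along with it, so the interfaces return to the default pfifo_fast behavior.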
And So Much More…
The Netem page on the Linux Foundation website has many other great examples, so there’s no point in me copying them here. If you’ve made it this far and are still interested, I highly encourage you to check out their page.