Netem is a great tool for simulating a WAN connection, with all the expected latency, jitter, packet loss, duplication, and bandwidth limitations. These instructions walk you through setting up a machine that sits between your server and client that emulates the behavior of a WAN connection. I use Fedora 14, so your distro may be a little different, but hopefully this post gets you pointed in the right direction.

The picture below shows the 2 most common configurations for a Netem box:

[Image: Netem Box Setup — the two common configurations]

I’ve always used the first configuration, but it doesn’t really matter.

1. Find a Suitable System

  • Any “reasonable” machine that can run Fedora 14 (I use an old Pentium 4 box since I don’t need to simulate a high-speed link)
  • It must have 2 network interfaces
  • It’s nice to use a smaller box if you want it to be portable

2. Install bridge-utils

Make sure you are root, and run:

rpm -q bridge-utils

to see if bridge-utils is installed. If it isn't, run:

yum install bridge-utils

3. Bridge the 2 Network Interfaces

First, clear the IP configuration on both interfaces so that neither holds an address of its own (the bridge will carry the traffic):

ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up

Then create the bridge and bring it up:

brctl addbr br0
brctl setfd br0 0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up

Note that we disable the forwarding delay (‘setfd br0 0’). This makes the bridge start passing traffic immediately instead of waiting out the default forwarding delay.
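If you want to confirm the bridge came up correctly, a quick sanity check (assuming the commands above succeeded and your interfaces really are eth0/eth1) is:

```shell
# Show the bridge and its attached interfaces
brctl show br0
```

The output should list br0 with both eth0 and eth1 as attached interfaces.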

One final step I had to perform is disabling kernel-level filtering on the bridge. This is done by writing 0 to the bridge nodes under /proc:

for f in /proc/sys/net/bridge/bridge-*; do echo 0 > $f; done

Note: If for some reason none of this works for you, check out the Linux Foundation page on network bridging.

4. Configure Netem

Netem is actually used in conjunction with the Traffic Control application, tc. I’m not going to go into detail here, but suffice it to say that tc allows you to do packet shaping and adjust packet scheduling. Check out the tc man page for more information.

tc allows you to specify the “queueing discipline” (or “qdisc” for short) used for sending outbound packets on an interface (the fact that it operates on outbound packets only is important to remember). Basically, a qdisc defines how outbound packets are ordered and sent. To view the current qdisc setup on your box, type:

tc qdisc show

The default qdisc is pfifo_fast. We’re going to change this to use a combination of Netem and Token Bucket Filter.

Note: For the following examples, I assume eth0 is connected to the network (client-side), and eth1 is directly connected to the server.

Limiting Bandwidth

The Token Bucket Filter (tbf) is used to limit how much data can exit the network interface per second… perfect for simulating WAN bandwidth limitations. Let’s assume we want to emulate a client-side WAN connection of 768kbps down and 128kbps up. Assuming the server is connected to eth1, the eth1 interface receives inbound traffic from the server and eth0 sends that traffic outbound to the client. Since we know a qdisc works on outbound traffic only, we limit eth0 to our download speed of 768kbps. Conversely, we configure eth1 for our upload speed of 128kbps.

tc qdisc replace dev eth0 root handle 1:0 tbf rate 768kbit burst 2048 latency 100ms

tc qdisc replace dev eth1 root handle 2:0 tbf rate 128kbit burst 2048 latency 100ms

We use the ‘replace’ command to overwrite any qdisc setting that’s there (you can use the ‘del’ command to simply remove qdiscs). We set the qdisc as the ‘root‘ of the tree, and configure the tbf ‘rate‘ accordingly. The ‘burst‘ and ‘latency‘ parameters control the initial number of tokens in the bucket and how long queued packets can hang around before being dropped, respectively.
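To put those numbers in perspective, the 2048-byte burst is a tiny slice of time at these rates. A quick back-of-the-envelope check with plain shell arithmetic (nothing here touches tc):

```shell
# ms of traffic a 2048-byte burst represents: bytes * 8 bits/byte * 1000 ms/s / rate_in_bps
echo $((2048 * 8 * 1000 / 768000))   # download side (768kbit): ~21 ms
echo $((2048 * 8 * 1000 / 128000))   # upload side (128kbit): 128 ms
```

If tbf seems to stall at higher rates, the usual fix is to increase ‘burst‘ — it needs to be large enough to cover at least one kernel timer tick’s worth of traffic.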

Adding Latency

We can append qdiscs that allow us to use different tools to control how our simulated WAN behaves. In this example, I’ll use Netem to artificially add 57ms of latency to the download connection with a random variation of +/- 13ms:

tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 57ms 13ms
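Netem isn’t limited to delay. As a sketch (the percentages below are made up for illustration — tune them to the link you’re modeling), the same qdisc can also introduce packet loss, duplication, and corruption:

```shell
# Modify the netem qdisc added above in place, layering loss/dup/corruption on top of the delay
tc qdisc change dev eth0 parent 1:1 handle 10: netem delay 57ms 13ms loss 0.5% duplicate 0.2% corrupt 0.1%
```

We use ‘change‘ here rather than ‘add‘ so the existing netem qdisc is updated instead of duplicated.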

And So Much More…

The Netem page on the Linux Foundation website has many other great examples, so there’s no point in me copying them here. If you’ve made it this far and are still interested, I highly encourage you to check out their page.


7 Responses to Netem WAN Emulation: How to Setup a Netem Box

  1. Trent says:

    So just a word from the wise, tried following this tutorial and please take extreme caution.

    When we were working on Step 3 and brought the bridge adapter up it caused a loop killing the entire office network connection.

    When testing and installing if you are following this tutorial I would just suggest that you work on an isolated network


  2. Charlie says:

    Thanx for sharing!
    the line
    “for f in /proc/sys/net/bridge-*; do echo 0 > $f; done”
    doesn’t work on my computer…


    Jameson reply on April 7th, 2014 2:06 pm:

    So for me, i needed to add a /bridge/ in the path

    for f in /proc/sys/net/bridge/bridge-*; do echo 0 > $f; done


  3. Tormod says:

    After 10 minutes of running with both speedlimit with tbf, and latency with netem.

    tc qdisc replace dev eth0 root handle 2:0 tbf rate 256kbit burst 2048 latency 100ms
    tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 400ms 100ms

    It looks like the computer gives up. It starts to rise the ping reply to 3000ms, and suddenly it gives up. No reply at all.

    Then i need to delete those lines and start over. Why? Is there a cache limit somewhere?
    I have the same settings on eth1 as mentioned on eth0 above.

    This is to simulate VSAT connection to vessels. “Ideally” it should have some packet loss, duplication and corrupt packages as well, but i have to find out the problem with everything stopping first.

