NETEM(8)                            Linux                            NETEM(8)

NAME
       netem - Network Emulator

SYNOPSIS
       tc qdisc ... dev DEVICE ] add netem OPTIONS

       OPTIONS := [ LIMIT ] [ DELAY ] [ LOSS ] [ CORRUPT ] [ DUPLICATION ]
               [ REORDERING ] [ RATE ] [ SLOT ]

       LIMIT := limit packets

       DELAY := delay TIME [ JITTER [ CORRELATION ]]
               [ distribution { uniform | normal | pareto | paretonormal } ]

       LOSS := loss { random PERCENT [ CORRELATION ] |
               state p13 [ p31 [ p32 [ p23 [ p14 ]]]] |
               gemodel p [ r [ 1-h [ 1-k ]]] } [ ecn ]

       CORRUPT := corrupt PERCENT [ CORRELATION ]

       DUPLICATION := duplicate PERCENT [ CORRELATION ]

       REORDERING := reorder PERCENT [ CORRELATION ] [ gap DISTANCE ]

       RATE := rate RATE [ PACKETOVERHEAD [ CELLSIZE [ CELLOVERHEAD ]]]

       SLOT := slot { MIN_DELAY [ MAX_DELAY ] |
               distribution { uniform | normal | pareto | paretonormal
               | FILE } DELAY JITTER }
               [ packets PACKETS ] [ bytes BYTES ]

DESCRIPTION
       The netem queue discipline provides Network Emulation functionality
       for testing protocols by emulating the properties of real-world
       networks.

       The queue discipline applies one or more network impairments to
       packets, such as delay, loss, duplication, and packet corruption.

netem OPTIONS
       limit COUNT
              Limits the maximum number of packets the qdisc may hold when
              doing delay.

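       For example, to combine a delay with a deeper queue than the default
       of 1000 packets (the device name and values are illustrative):

       # tc qdisc add dev eth0 root netem delay 100ms limit 2000
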
       delay TIME [ JITTER [ CORRELATION ]]
              Delays the packets before sending.  The optional parameters
              allow introducing a delay variation and a correlation.  Delay
              and jitter values are expressed in milliseconds; correlation
              is set by specifying, as a percentage, how much the previous
              delay will impact the current random value.

       distribution TYPE
              Specifies a pattern for the delay distribution.

              uniform
                     Use an equally weighted distribution of packet delays.

              normal Use a Gaussian distribution of delays, sometimes
                     called a Bell curve.

              pareto Use a Pareto distribution of packet delays.  This is
                     useful to emulate long-tail distributions.

              paretonormal
                     A mix of the pareto and normal distributions, with
                     properties of both the Bell curve and the long tail.

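              For example, a long-tailed delay could be emulated with
              (device name and values illustrative):

              # tc qdisc add dev eth0 root netem delay 100ms 20ms distribution pareto
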
       loss MODEL
              Drop packets based on a loss model.  MODEL can be one of:

              random PERCENT
                     Each packet loss is independent.

              state P13 [ P31 [ P32 [ P23 [ P14 ]]]]
                     Use a 4-state Markov chain to describe packet loss.
                     P13 is the packet loss probability.  Optional
                     parameters extend the model to 2-state (P31), 3-state
                     (P23 and P32) and 4-state (P14).

                     The Markov chain states are:

                     1      good packet reception (no loss).

                     2      good reception within a burst.

                     3      burst losses.

                     4      independent losses.

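                     For example, a 2-state model entering a loss burst
                     with probability 1% and leaving it with probability
                     10% could be configured as (values illustrative):

                     # tc qdisc add dev eth0 root netem loss state 1% 10%
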
              gemodel PERCENT [ R [ 1-H [ 1-K ]]]
                     Use a Gilbert-Elliott (burst loss) model based on:

                     PERCENT
                            probability of entering the bad (lossy) state.

                     R      probability of exiting the bad state.

                     1-H    loss probability in the bad state.

                     1-K    loss probability in the good state.

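                     For example (values illustrative):

                     # tc qdisc add dev eth0 root netem loss gemodel 1% 10% 70% 0.1%

                     This enters the bad state with probability 1%, leaves
                     it with probability 10%, and loses 70% of packets in
                     the bad state and 0.1% in the good state.
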
       ecn    Use Explicit Congestion Notification (ECN) to mark packets
              instead of dropping them.  A loss model must also be
              specified for this option to take effect.

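       For example, to mark 1% of packets with ECN instead of dropping
       them (device name illustrative):

       # tc qdisc add dev eth0 root netem loss random 1% ecn
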
       corrupt PERCENT
              Modifies the contents of the packet at a random position,
              based on PERCENT.

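       For example, to corrupt 0.1% of the packets going out on eth0
       (device name illustrative):

       # tc qdisc add dev eth0 root netem corrupt 0.1%
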
       duplicate PERCENT
              Creates a copy of the packet before queuing.

       reorder PERCENT
              Modifies the order of packets in the queue.

       gap DISTANCE
              Sends some packets immediately.  The first (DISTANCE - 1)
              packets are delayed and the next packet is sent immediately.

       rate RATE [ PACKETOVERHEAD [ CELLSIZE [ CELLOVERHEAD ]]]
              Delays packets based on packet size to emulate a fixed link
              speed.  Optional parameters:

              PACKETOVERHEAD
                     Specify a per-packet overhead in bytes, used to
                     simulate additional link-layer headers.  A negative
                     value can be used to simulate a stripped Ethernet
                     header (e.g. -14) or header compression.

              CELLSIZE
                     Simulate link-layer schemes with a fixed cell size,
                     like ATM.

              CELLOVERHEAD
                     Specify a per-cell overhead.

       Rate throttling is impacted by several factors, including the kernel
       clock granularity.  This can show up as artificial packet
       compression (bursts).

       slot MIN_DELAY [ MAX_DELAY ]
              Allows emulating slotted networks by deferring delivery of
              accumulated packets to within a slot.  Each available slot is
              configured with a minimum delay to acquire, and an optional
              maximum delay.

       slot distribution
              Allows configuring the slots based on a distribution, similar
              to the distribution option for packet delays.

       These slot options can provide a crude approximation of bursty MACs
       such as DOCSIS, WiFi, and LTE.

       Slot emulation is limited by several factors: as with a rate, the
       kernel clock granularity applies, so attempts to deliver many
       packets within a slot will be smeared by the timer resolution and by
       the underlying native bandwidth.

       It is possible to combine slotting with a rate, in which case
       complex behaviors arise where either the rate, or the slot limits on
       bytes or packets per slot, govern the actual delivered rate.
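
       For example, to accumulate packets for between 800us and 10ms and
       then deliver them in bursts of at most 42 packets or 64KB (device
       name and values illustrative):

       # tc qdisc add dev eth0 root netem slot 800us 10ms packets 42 bytes 64k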

LIMITATIONS
       Netem is limited by the timer granularity in the kernel.  Rate and
       delay may be impacted by clock interrupts.

       Mixing forms of reordering may lead to unexpected results.  For any
       method of reordering to work, some delay is necessary.  If the delay
       is less than the inter-packet arrival time, no reordering will be
       seen.  Due to mechanisms like TSQ (TCP Small Queues), netem must be
       placed on the ingress of the receiver host for TCP performance test
       results to be realistic.
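
       One common way to apply netem on ingress is to redirect incoming
       traffic to an intermediate functional block (ifb) device and attach
       netem there; the device names and delay here are illustrative:

       # modprobe ifb
       # ip link set dev ifb0 up
       # tc qdisc add dev eth0 handle ffff: ingress
       # tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
            action mirred egress redirect dev ifb0
       # tc qdisc add dev ifb0 root netem delay 100ms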

       Combining netem with other qdiscs is possible but may not always
       work, because netem uses the skb control block to set delays.

EXAMPLES
       # tc qdisc add dev eth0 root netem delay 100ms
          Add a fixed amount of delay to all packets going out on device
          eth0.  Each packet will be delayed by exactly 100ms.

       # tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
          This causes an added delay of 100ms ± 10ms, with the next
          packet's delay value biased by 25% toward the most recent delay.
          This isn't a true statistical correlation, but an approximation.

       # tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal
          This delays packets according to a normal distribution (Bell
          curve) over a range of 100ms ± 20ms.

       # tc qdisc change dev eth0 root netem loss 0.1%
          This causes 1/10th of a percent (i.e., 1 out of 1000) of packets
          to be randomly dropped.

          An optional correlation may also be added.  This causes the
          random number generator to be less random and can be used to
          emulate packet burst losses.

       # tc qdisc change dev eth0 root netem duplicate 1%
          This causes one percent of the packets sent on eth0 to be
          duplicated.

       # tc qdisc change dev eth0 root netem loss 0.3% 25%
          This will cause 0.3% of packets to be lost, with each successive
          loss probability biased by 25% toward the previous one.

       There are two different ways to specify reordering.  The gap method
       uses a fixed sequence and reorders every Nth packet.

       # tc qdisc change dev eth0 root netem gap 5 delay 10ms
          This causes every 5th (10th, 15th, ...) packet to be sent
          immediately and every other packet to be delayed by 10ms.  This
          is predictable and useful for basic protocol testing, like
          reassembly.

       The reorder form instead misorders a percentage of the packets.

       # tc qdisc change dev eth0 root netem delay 10ms reorder 25% 50%
          In this example, 25% of packets (with a correlation of 50%) will
          be sent immediately; the others will be delayed by 10ms.

       Packets will also get reordered if the jitter is large enough.

       # tc qdisc change dev eth0 root netem delay 100ms 75ms
          If the first packet gets a random delay of 100ms (100ms base -
          0ms jitter) and the second packet is sent 1ms later and gets a
          delay of 50ms (100ms base - 50ms jitter), the second packet will
          be sent first.  This is because the queue discipline tfifo
          inside netem keeps packets in order by time to send.

       If you don't want this behavior, replace the internal queue
       discipline tfifo with a simple FIFO queue discipline.

       # tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
       # tc qdisc add dev eth0 parent 1:1 pfifo limit 1000

       Example of using rate control and cell size.

       # tc qdisc add dev eth0 root netem rate 5kbit 20 100 5
          Delay all outgoing packets on device eth0 with a rate of 5kbit,
          a per-packet overhead of 20 bytes, a cell size of 100 bytes and
          a per-cell overhead of 5 bytes.

       It is possible to selectively apply impairments using traffic
       classification.

       # tc qdisc add dev eth0 root handle 1: prio
       # tc qdisc add dev eth0 parent 1:3 handle 30: tbf rate 20kbit buffer 1600 limit 3000
       # tc qdisc add dev eth0 parent 30:1 handle 31: netem delay 200ms 10ms distribution normal
       # tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 65.172.181.4/32 flowid 1:3
          This example uses a priority queueing discipline; a TBF is added
          to do rate control; and a simple netem delay is applied.  A
          filter classifies all packets going to 65.172.181.4 as priority
          3.

SOURCES
       1. Hemminger S., "Network Emulation with NetEm", Open Source
          Development Lab, April 2005
          ⟨http://devresources.linux-foundation.org/shemminger/netem/LCA2005_paper.pdf⟩

       2. Salsano S., Ludovici F., Ordine A., "Definition of a general and
          intuitive loss model for packet networks and its implementation
          in the Netem module in the Linux kernel", available at
          ⟨http://netgroup.uniroma2.it/NetemCLG⟩

SEE ALSO
       tc(8)

AUTHOR
       Netem was written by Stephen Hemminger at the Linux Foundation and
       was inspired by NISTnet.

       The original man page was created by Fabio Ludovici <fabio.ludovici
       at yahoo dot it> and Hagen Paul Pfeifer <hagen@jauu.net>.

iproute2                       25 November 2011                      NETEM(8)