1IPERF(1)                         User Manuals                         IPERF(1)
2
3
4

NAME

6       iperf  -  perform  network traffic tests using network sockets. Metrics
7       include throughput and latency.
8

SYNOPSIS

10       iperf -s [options]
11
12       iperf -c server [options]
13
14       iperf -u -s [options]
15
16       iperf -u -c server [options]
17
18

DESCRIPTION

20       iperf 2 is a testing tool which performs network  traffic  measurements
21       using  network  sockets.  The  performance  metrics  supported  include
22       throughput and latency. Iperf can use both TCP and UDP sockets (or pro‐
23       tocols.)  It  supports  unidirectional,  full  duplex (same socket) and
24       bidirectional traffic,  and  supports  multiple,  simultaneous  traffic
25       streams. It supports multicast traffic including source specific multi‐
26       cast (SSM) joins. Its multi-threaded design  allows  for  peak  perfor‐
27       mance. Metrics displayed help to characterize host to host network per‐
28       formance. Note: Setting the enhanced (-e) option provides all available
29       metrics.
30
31       The user must establish both a server (to receive traffic) and a
32       client (to generate and send traffic) for a test to occur.  The  client
33       and  server  typically are on different hosts or computers but need not
34       be.
35
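       A minimal test, for example, runs the server on one host and the
       client on another (the server host name below is a placeholder):

              iperf -s                        # receiver
              iperf -c <server-host> -t 10    # transmitter, 10 second test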

GENERAL OPTIONS

37       -b, --bandwidth
38              set the target bandwidth and  optional  standard  deviation  per
39              <mean>,[<stdev>] (See NOTES for suffixes)
40
41       -e, --enhanced
42              Display enhanced output in reports, otherwise use legacy report
43              (ver 2.0.5) formatting (see NOTES)
44
45       -f, --format [abkmgBKMG]
46              format to report: adaptive, bits, Bytes,  Kbits,  Mbits,  Gbits,
47              KBytes, MBytes, GBytes (see NOTES for more)
48
49       -h, --help
50              print a help synopsis
51
52       -i, --interval < n[p] | f >
53              sample  or display interval reports every n seconds (default) or
54              n packets (per optional p suffix.) If f is used then the  inter‐
55              val will be each frame or burst. The frame interval reporting is
56              experimental.  Compiling with fast sampling is also suggested, i.e.
57              ./configure --enable-fastsampling
58
59       -l, --len n[kmKM]
60              set  read/write  buffer size (TCP) or length (UDP) to n (TCP de‐
61              fault 128K, UDP default 1470)
62
63           --l2checks
64              perform layer 2 length checks on received UDP packets  (requires
65              systems that support packet sockets, e.g. Linux)
66
67       -m, --print_mss
68              print TCP maximum segment size (MTU - TCP/IP header)
69
70           --NUM_REPORT_STRUCTS <count>
71              Override  the  default  shared  memory  size between the traffic
72              thread(s) and reporter thread in order to  mitigate  mutex  lock
73              contentions.  The default value of 5000 should be sufficient for
74              1Gb/s networks. Increase this upon seeing the Warning message of
75              reporter  thread  too  slow.  If the Warning message isn't seen,
76              then increasing this won't have any  significant  effect  (other
77              than to use some additional memory.)
78
79       -o, --output filename
80              output the report or error message to this specified file
81
82           --permit-key [=<value>]
83              Set a key value that must match for the server to accept traffic
84              on a connection. If the option is given without a value  on  the
85              server  a  key  value will be autogenerated and displayed in its
86              initial settings report. The value is required on  clients.  The
87              value  will  also be used as part of the transfer id in reports.
88              The option set on the client but not the server will also  cause
89              the server to reject the client's traffic. TCP only, no UDP sup‐
90              port.  (suggested use is with -1 or --singleclient, -P 1, and a
91              short -t on the server; see the example at the end of this section)
92
93       -p, --port n
94              set server port to listen on/connect to n (default 5001)
95
96           --sum-only
97              set  the output to sum reports only. Useful for -P at large val‐
98              ues
99
100       -u, --udp
101              use UDP rather than TCP
102
103       -w, --window n[kmKM]
104              TCP window size (socket buffer size)
105
106       -z, --realtime
107              Request real-time scheduler, if supported.
108
109       -B, --bind host[:port][%dev]
110              bind to host, ip address or multicast address, optional port  or
111              device (see NOTES)
112
113       -C, --compatibility
114              for use with older versions; does not send extra msgs
115
116       -M, --mss n
117              set TCP maximum segment size (MTU - 40 bytes)
118
119       -N, --nodelay
120              set TCP no delay, disabling Nagle's Algorithm
121
122       -v, --version
123              print version information and quit
124
125       -x, --reportexclude [CDMSV]
126              exclude C(connection) D(data) M(multicast) S(settings) V(server)
127              reports
128
129       -y, --reportstyle C|c
130              if set to C or c report results as CSV (comma separated values)
131
132       -Z, --tcp-congestion
133              Set the default congestion-control algorithm to be used for  new
134              connections. Platforms must support setsockopt's TCP_CONGESTION.
135              (Notes: See sysctl and tcp_allowed_congestion_control for avail‐
136              able options. May require root privileges.)
137

SERVER SPECIFIC OPTIONS

139       -1, --singleclient
140              set the server to process only one client at a time
141
142       -b, --bandwidth n[kmgKMG]
143              set target read rate to n bits/sec. TCP only for the server.
144
145       -s, --server
146              run in server mode
147
148           --histograms[=binwidth[u],bincount,[lowerci],[upperci]]
149              enable  latency  histograms for udp packets (-u), for tcp writes
150              (with --trip-times), or for either udp or tcp with --isochronous
151              clients. The binning can be modified. Bin widths (default 1 mil‐
152       lisecond, append u for microseconds), bincount  is  total  bins
153              (default  1000),  ci  is confidence interval between 0-100% (de‐
154       fault lower 5%, upper 95%, 3 stdev 99.7%). See example below.
155
156       -B, --bind ip | ip%device
157              bind src ip addr and optional src device for receiving
158
159       -D, --daemon
160              run the server as a daemon. On Windows this will run the  speci‐
161              fied command-line under the IPerfService, installing the service
162              if necessary. Note the service is not configured  to  auto-start
163              or  restart  - if you need a self-starting service you will need
164              to create an init script or use Windows "sc" commands.
165
166       -H, --ssm-host host
167              Set the source host (ip addr) per SSM multicast, i.e. the  S  of
168              the S,G
169
170       -R, --remove
171              remove the IPerfService (Windows only).
172
173       -U, --single_udp
174              run in single threaded UDP mode
175
176       -V, --ipv6_domain
177              Enable  IPv6  reception  by  setting  the  domain  and socket to
178              AF_INET6 (Can receive on both IPv4 and IPv6)
179
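       For example, a server collecting latency histograms for UDP traffic
       with 100 microsecond bins and 10000 bins total (values are
       illustrative only):

              iperf -s -u -e --histograms=100u,10000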

CLIENT SPECIFIC OPTIONS

181       -b, --bandwidth n[kmgKMG][,n[kmgKMG]] | n[kmgKMG]pps
182              set target bandwidth to n bits/sec (default  1  Mbit/sec)  or  n
183              packets  per  sec. This may be used with TCP or UDP. Optionally,
184              for variable loads, use the format mean,standard deviation (see
                 the example at the end of this section)
185
186       -c, --client host | host%device
187              run in client mode, connecting to host  where the optional  %dev
188              will use SO_BINDTODEVICE to bind that output interface (requires
189              root; see NOTES)
190
191           --connect-only[=n]
192              only perform a TCP connect (or 3WHS) without any data  transfer,
193              useful  to  measure  TCP connect() times. Optional value of n is
194              the total number of connects to do (zero is run  forever.)  Note
195              that -i will rate limit the connects where -P will create bursts
196              and -t will end the client and hence end its connect attempts.
197
198           --connect-retries n
199              number of times to retry a TCP connect at the application level.
200              See  operating  system information on the details of TCP connect
201              related settings.
202
203       -d, --dualtest
204              Do a simultaneous bidirectional test using  two  unidirec‐
205              tional sockets
206
207           --fq-rate n[kmgKMG]
208              Set a rate to be used with fair-queueing based socket-level pac‐
209              ing, in bytes or bits per second. Only  available  on  platforms
210              supporting the SO_MAX_PACING_RATE socket option. (Note: Here the
211              suffixes indicate bytes/sec or bits/sec per use of uppercase  or
212              lowercase, respectively)
213
214           --full-duplex
215              run  a  full  duplex test, i.e. traffic in both transmit and re‐
216              ceive directions using the same socket
217
218           --incr-dstip
219              increment the destination ip address  when  using  the  parallel
220              (-P) option
221
222           --ipg n
223              set  the inter-packet gap to n (units of seconds) for packets or
224              within a frame/burst when --isochronous is set
225
226           --isochronous[=fps:mean,stdev]
227              send isochronous traffic with frequency frames  per  second  and
228              load  defined  by mean and standard deviation using a log normal
229              distribution, defaults to 60:20m,0. (Note: Here the suffixes in‐
230              dicate  bytes/sec or bits/sec per use of uppercase or lowercase,
231              respectively. Also the p suffix is supported to  set  the  burst
232              size  in packets, e.g. isochronous=2:25p will send two 25 packet
233              bursts every second, or one 25 packet burst every 0.5 seconds.)
234
235           --local-only[=1|0]
236              Set 1 to limit traffic to the local network  only  (through  the
237              use of SO_DONTROUTE); set to zero otherwise. Either setting over‐
238              rides the compile time default (see configure --default-localonly)
239
240           --near-congestion[=n]
241              Enable TCP write rate limiting per the sampled RTT. The delay is
242              applied  after  the  -l  number of bytes have completed. The op‐
243              tional value is the multiplier to the RTT and defines  the  time
244              delay.  This value defaults to 0.5 if it is not set. Values less
245              than 1 are supported but the value cannot be negative.  This  is
246              an  experimental  feature.  It is not likely stable on live net‐
247              works. Suggested use is over controlled test networks.
248
249
250           --no-connect-sync
251              By default, parallel traffic threads (per  -P  greater  than  1)
252              will  synchronize  after  their  TCP  connects and prior to each
253              sending traffic, i.e. all the threads first complete (or  error)
254              the  TCP 3WHS before any traffic thread will start sending. This
255              option disables that  synchronization  such  that  each  traffic
256              thread  will start sending immediately after completing its suc‐
257              cessful connect.
258
259           --no-udp-fin
260              Don't perform the UDP final  server  to  client  exchange  which
261              means  there  won't  be  a  final server report displayed on the
262              client. All packets per the test will be from the client to  the
263              server  and  no  packets  should be sent in the other direction.
264              It's highly suggested that -t be set on the server if  this  op‐
265              tion  is  being  used.   This  is because there will be only one
266              trigger ending packet sent from client to  server  and  if  it's
267              lost  then the server will continue to run. (Requires ver 2.0.14
268              or better)
269
270       -n, --num n[kmKM]
271              number of bytes to transmit (instead of -t)
272
273           --permit-key-timeout <value>
274              Set the lifetime of the permit key in seconds.  Defaults  to  20
275              seconds if not set. A value of zero will disable the timer.
276
277       -r, --tradeoff
278              Do  a  bidirectional  test individually - client-to-server, fol‐
279              lowed by a reversed test, server-to-client
280
281       -t, --time n
282              time in seconds to listen for new traffic  connections,  receive
283              traffic or transmit traffic (Defaults: transmit is 10 secs while
284              listen and receive are indefinite)
285
286           --trip-times
287              enable the measurement of end to end  write  to  read  latencies
288              (client and server clocks must be synchronized)
289
290           --txdelay-time
291              time  in seconds to hold back or delay after the TCP connect and
292              prior to the socket writes. For UDP it's the delay  between  the
293              traffic thread starting and the first write.
294
295           --txstart-time n.n
296              set  the  txstart-time  to  n.n  using unix or epoch time format
297              (supports microsecond resolution, e.g. 1536014418.123456) An  ex‐
298              ample to delay one second using command substitution is iperf -c
299              192.168.1.10 --txstart-time $(expr $(date +%s) + 1).$(date +%N)
300
301       -B, --bind ip | ip:port | ipv6 -V | [ipv6]:port -V
302              bind src ip addr and optional port as the source of traffic (see
303              notes)
304
305       -F, --fileinput name
306              input the data to be transmitted from a file
307
308       -I, --stdin
309              input the data to be transmitted from stdin
310
311       -L, --listenport n
312              port to receive bidirectional tests back on
313
314       -P, --parallel n
315              number of parallel client threads to run
316
317       -R, --reverse
318              reverse  the traffic flow (useful for testing through firewalls,
319              see NOTES)
320
321       -S, --tos
322              set the socket's IP_TOS (byte) field
323
324       -T, --ttl n
325              time-to-live, for multicast (default 1)

          -V, --ipv6_domain
326              Set the domain to IPv6 (send packets over IPv6)
327
328       -X, --peerdetect
329              run peer version detection prior to traffic.
330
331       -Z, --linux-congestion algo
332              set TCP congestion control algorithm (Linux only)
333
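       For example, a client offering a variable UDP load with a mean of 40
       Mbit/sec and a standard deviation of 4 Mbit/sec (host and values are
       illustrative only):

              iperf -c <host> -u -e -i 1 -b 40m,4m -t 30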

EXAMPLES

335       TCP tests (client)
336
337       iperf -c <host> -e -i 1
338       ------------------------------------------------------------
339       Client connecting to <host>, TCP port 5001 with pid 5149
340       Write buffer size:  128 KByte
341       TCP window size:  340 KByte (default)
342       ------------------------------------------------------------
343       [   3]  local  45.56.85.133 port 49960 connected with 45.33.58.123 port
344       5001 (ct=3.23 ms)
345       [  ID]  Interval         Transfer     Bandwidth        Write/Err   Rtry
346       Cwnd/RTT        NetPwr
347       [   3]  0.00-1.00  sec    126 MBytes  1.05 Gbits/sec  1006/0          0
348       56K/626 us  210636.47
349       [  3] 1.00-2.00 sec   138 MBytes   1.15  Gbits/sec   1100/0         299
350       483K/3884 us  37121.32
351       [   3]  2.00-3.00  sec    137 MBytes  1.15 Gbits/sec  1093/0         24
352       657K/5087 us  28162.31
353       [  3] 3.00-4.00 sec   126 MBytes   1.06  Gbits/sec   1010/0         284
354       294K/2528 us  52366.58
355       [   3]  4.00-5.00  sec    117  MBytes   980 Mbits/sec  935/0        373
356       487K/2025 us  60519.66
357       [  3] 5.00-6.00 sec   144 MBytes   1.20  Gbits/sec   1149/0           2
358       644K/3570 us  42185.36
359       [   3]  6.00-7.00  sec    126 MBytes  1.06 Gbits/sec  1011/0        112
360       582K/5281 us  25092.56
361       [  3] 7.00-8.00 sec   110  MBytes    922  Mbits/sec   879/0          56
362       279K/1957 us  58871.89
363       [   3]  8.00-9.00  sec    127 MBytes  1.06 Gbits/sec  1014/0         46
364       483K/3372 us  39414.89
365       [  3] 9.00-10.00 sec   132 MBytes  1.11  Gbits/sec   1054/0           0
366       654K/3380 us  40872.75
367       [   3]  0.00-10.00 sec  1.25 GBytes  1.07 Gbits/sec  10251/0       1196
368       -1K/3170 us  42382.03
369
370
371       where (per -e,)
372              ct= TCP connect time (or three way handshake time 3WHS)
373              Write/Err Total number of successful socket writes. Total number
374              of non-fatal socket write errors
375              Rtry Total number of TCP retries
376              Cwnd/RTT  (*nix  only) TCP congestion window and round trip time
377              (sampled where NA indicates no value)
378              NetPwr (*nix only) Network power defined as (throughput / RTT)
379
380
381       TCP tests (server)
382
383       iperf -s -e -i 1 -l 8K
384       ------------------------------------------------------------
385       Server listening on TCP port 5001 with pid 13430
386       Read buffer size: 8.00 KByte
387       TCP window size: 85.3 KByte (default)
388       ------------------------------------------------------------
389       [  4] local 45.33.58.123 port 5001  connected  with  45.56.85.133  port
390       49960
391       [     ID]    Interval           Transfer       Bandwidth          Reads
392       Dist(bin=1.0K)
393       [    4]   0.00-1.00   sec     124   MBytes    1.04   Gbits/sec    22249
394       798:2637:2061:767:2165:1563:589:11669
395       [    4]   1.00-2.00   sec     136   MBytes    1.14   Gbits/sec    24780
396       946:3227:2227:790:2427:1888:641:12634
397       [    4]   2.00-3.00   sec     137   MBytes    1.15   Gbits/sec    24484
398       1047:2686:2218:810:2195:1819:728:12981
399       [    4]   3.00-4.00   sec     126   MBytes    1.06   Gbits/sec    20812
400       863:1353:1546:614:1712:1298:547:12879
401       [    4]   4.00-5.00   sec     117   MBytes     984   Mbits/sec    20266
402       769:1886:1828:589:1866:1350:476:11502
403       [    4]   5.00-6.00   sec     143   MBytes    1.20   Gbits/sec    24603
404       1066:1925:2139:822:2237:1827:744:13843
405       [    4]   6.00-7.00   sec     126   MBytes    1.06   Gbits/sec    22635
406       834:2464:2249:724:2269:1646:608:11841
407       [    4]   7.00-8.00   sec     110   MBytes     921   Mbits/sec    21107
408       842:2437:2747:592:2871:1903:496:9219
409       [    4]   8.00-9.00   sec     126   MBytes    1.06   Gbits/sec    22804
410       1038:1784:2639:656:2738:1927:573:11449
411       [    4]   9.00-10.00   sec     133   MBytes    1.11   Gbits/sec   23091
412       1088:1654:2105:710:2333:1928:723:12550
413       [   4]  0.00-10.02   sec    1.25   GBytes    1.07   Gbits/sec    227306
414       9316:22088:21792:7096:22893:17193:6138:120790
415
416       where (per -e,)
417              Reads Total number of socket reads
418              Dist(bin=size)  Eight bin histogram of the socket reads returned
419              byte count. Bin width is set per size. Bins are separated  by  a
420              colon. In the example, the bins are 0-1K, 1K-2K, .., 7K-8K.
421
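       For instance, in the first interval above the field
       798:2637:2061:767:2165:1563:589:11669 indicates that 798 reads
       returned between 0 and 1 KBytes, 2637 reads returned between 1 and 2
       KBytes, and so on, up to 11669 reads returning between 7 and 8
       KBytes, i.e. close to the full 8 KByte read size set by -l.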
422
423       TCP tests (server with --trip-times on client)

          iperf -s -i 1 -w 4M
424       ------------------------------------------------------------
425       Server listening on TCP port 5001
426       TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
427       ------------------------------------------------------------
428       [   4] local 192.168.1.4%eth0 port 5001 connected with 192.168.1.7 port
429       44798 (trip-times) (MSS=1448) (peer 2.0.14-alpha)
430       [  ID]   Interval          Transfer      Bandwidth      Burst   Latency
431       avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
432       [      4]     0.00-1.00    sec     19.0    MBytes      159    Mbits/sec
433       52.314/10.238/117.155/19.779  ms   (151/131717)   1.05   MByte   380.19
434       781=306:253:129:48:18:15:8:4
435       [      4]     1.00-2.00    sec     20.0    MBytes      168    Mbits/sec
436       53.863/21.264/79.252/12.277   ms   (160/131080)   1.08   MByte   389.38
437       771=294:236:126:60:18:24:10:3
438       [      4]     2.00-3.00    sec     18.2    MBytes      153    Mbits/sec
439       58.718/22.000/137.944/20.397  ms   (146/130964)   1.06   MByte   325.64
440       732=299:231:98:52:18:19:10:5
441       [    4]   3.00-4.00   sec    19.7   MBytes     165  Mbits/sec   50.448/
442       8.921/82.728/14.627    ms    (158/130588)      997     KByte     409.00
443       780=300:255:121:58:15:18:7:6
444       [      4]     4.00-5.00    sec     18.8    MBytes      158    Mbits/sec
445       53.826/11.169/115.316/15.541  ms   (150/131420)   1.02   MByte   366.24
446       761=302:226:134:52:22:17:7:1
447       [      4]     5.00-6.00    sec     19.5    MBytes      164    Mbits/sec
448       50.943/11.922/76.134/14.053   ms   (156/131276)   1.03   MByte   402.00
449       759=273:246:149:45:16:18:4:8
450       [      4]     6.00-7.00    sec     18.5    MBytes      155    Mbits/sec
451       57.643/10.039/127.850/18.950  ms   (148/130926)   1.05   MByte   336.16
452       710=262:228:133:37:16:20:8:6
453       [      4]     7.00-8.00    sec     19.6    MBytes      165    Mbits/sec
454       52.498/12.900/77.045/12.979   ms   (157/131003)   1.00   MByte   391.78
455       742=288:200:135:68:16:23:4:8
456       [    4]   8.00-9.00   sec    18.0   MBytes     151  Mbits/sec   58.370/
457       8.026/150.243/21.445    ms    (144/131255)    1.06     MByte     323.81
458       716=268:241:108:51:20:17:8:3
459       [      4]    9.00-10.00    sec     18.4    MBytes      154    Mbits/sec
460       56.112/12.419/79.790/13.668   ms   (147/131194)   1.05   MByte   343.70
461       822=330:303:120:26:16:14:9:4
462       [     4]    10.00-10.06    sec     1.03    MBytes      146    Mbits/sec
463       69.880/45.175/78.754/10.823   ms   (9/119632)   1.74    MByte    260.40
464       62=26:30:5:1:0:0:0:0
465       [    4]   0.00-10.06   sec     191   MBytes    159  Mbits/sec   54.183/
466       8.026/150.243/16.781    ms    (1526/131072)    1.03    MByte     366.98
467       7636=2948:2449:1258:498:175:185:75:48
468
469       where (per -e,)
470              Burst Latency One way TCP write() to read() latency in mean/min‐
471              imum/maximum/standard  deviation  format  (Note:  requires   the
472              client's and server's system clocks to be synchronized to a com‐
473              mon reference, e.g. using precision time  protocol  PTP.  A  GPS
474              disciplined OCXO is a recommended reference.)
475              cnt  Number  of completed bursts received and used for the burst
476              latency calculations
477              size Average burst size in bytes (computed average and  estimate
478              only)
479              inP  inP,  short for in progress, is the average number of bytes
480              in progress or in flight. This is  taken  from  the  application
481              level  write to read perspective. Note this is a mean value. The
482              parenthesis value is the standard deviation from the mean.  (Re‐
483              quires --trip-times on client. See Little's law in NOTES.)
484              NetPwr Network power defined as (throughput / one way latency)
485
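       As a rough consistency check using Little's law (see NOTES), the
       summary line above reports 159 Mbits/sec, which is about 19.9
       MBytes/sec, and a mean latency of 54.183 ms; 19.9 MBytes/sec * 0.054
       sec is roughly 1 MByte, which agrees with the reported inP of 1.03
       MByte.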
486
487       UDP tests (client)
488
489       iperf -c <host> -e -i 1 -u -b 10m
490       ------------------------------------------------------------
491       Client connecting to <host>, UDP port 5001 with pid 5169
492       Sending 1470 byte datagrams, IPG target: 1176.00 us (kalman adjust)
493       UDP buffer size:  208 KByte (default)
494       ------------------------------------------------------------
495       [   3]  local  45.56.85.133 port 32943 connected with 45.33.58.123 port
496       5001
497       [ ID] Interval        Transfer     Bandwidth      Write/Err  PPS
498       [  3] 0.00-1.00 sec  1.19 MBytes  10.0 Mbits/sec  852/0      851 pps
499       [  3] 1.00-2.00 sec  1.19 MBytes  10.0 Mbits/sec  850/0      850 pps
500       [  3] 2.00-3.00 sec  1.19 MBytes  10.0 Mbits/sec  850/0      850 pps
501       [  3] 3.00-4.00 sec  1.19 MBytes  10.0 Mbits/sec  851/0      850 pps
502       [  3] 4.00-5.00 sec  1.19 MBytes  10.0 Mbits/sec  850/0      850 pps
503       [  3] 5.00-6.00 sec  1.19 MBytes  10.0 Mbits/sec  850/0      850 pps
504       [  3] 6.00-7.00 sec  1.19 MBytes  10.0 Mbits/sec  851/0      850 pps
505       [  3] 7.00-8.00 sec  1.19 MBytes  10.0 Mbits/sec  850/0      850 pps
506       [  3] 8.00-9.00 sec  1.19 MBytes  10.0 Mbits/sec  851/0      850 pps
507       [  3] 0.00-10.00 sec  11.9 MBytes  10.0 Mbits/sec  8504/0      850 pps
508       [  3] Sent 8504 datagrams
509       [  3] Server Report:
510       [  3] 0.00-10.00 sec  11.9 MBytes  10.0 Mbits/sec   0.047 ms    0/ 8504
511       (0%)  0.537/ 0.392/23.657/ 0.497 ms  850 pps  2329.37
512
513       where (per -e,)
514              Write/Err Total number of successful socket writes. Total number
515              of non-fatal socket write errors
516              PPS Transmit packet rate in packets per second
517
518
519       UDP tests (server)

          iperf -s -i 1 -w 4M -u
520       ------------------------------------------------------------
521       Server listening on UDP port 5001
522       Receiving 1470 byte datagrams
523       UDP buffer size: 8.00 MByte (WARNING: requested 4.00 MByte)
524       ------------------------------------------------------------
525       [  3] local 192.168.1.4 port 5001 connected with 192.168.1.1 port 60027
526       (WARN:  winsize=8.00  MByte  req=4.00  MByte)  (trip-times) (0.0) (peer
527       2.0.14-alpha)
528       [ ID] Interval        Transfer     Bandwidth        Jitter   Lost/Total
529       Latency avg/min/max/stdev PPS  inP NetPwr
530       [  3] 0.00-1.00 sec  44.5 MBytes   373 Mbits/sec   0.071 ms 52198/83938
531       (62%) 75.185/ 2.367/85.189/14.430 ms 31854 pps 3.64 MByte 620.58
532       [   3]  1.00-2.00  sec   44.8  MBytes    376   Mbits/sec     0.015   ms
533       59549/143701  (41%) 79.609/75.603/85.757/ 1.454 ms 31954 pps 3.56 MByte
534       590.04
535       [   3]  2.00-3.00  sec   44.5  MBytes    373   Mbits/sec     0.017   ms
536       59494/202975  (29%) 80.006/75.951/88.198/ 1.638 ms 31733 pps 3.56 MByte
537       583.07
538       [   3]  3.00-4.00  sec   44.5  MBytes    373   Mbits/sec     0.019   ms
539       59586/262562  (23%) 79.939/75.667/83.857/ 1.145 ms 31767 pps 3.56 MByte
540       583.57
541       [   3]  4.00-5.00  sec   44.5  MBytes    373   Mbits/sec     0.081   ms
542       59612/322196  (19%) 79.882/75.400/86.618/ 1.666 ms 31755 pps 3.55 MByte
543       584.40
544       [   3]  5.00-6.00  sec   44.7  MBytes    375   Mbits/sec     0.064   ms
545       59571/381918  (16%) 79.767/75.571/85.339/ 1.556 ms 31879 pps 3.56 MByte
546       588.02
547       [   3]  6.00-7.00  sec   44.6  MBytes    374   Mbits/sec     0.041   ms
548       58990/440820  (13%) 79.722/75.662/85.938/ 1.087 ms 31820 pps 3.58 MByte
549       586.73
550       [   3]  7.00-8.00  sec   44.7  MBytes    375   Mbits/sec     0.027   ms
551       59679/500548  (12%) 79.745/75.704/84.731/ 1.094 ms 31869 pps 3.55 MByte
552       587.46
553       [   3]  8.00-9.00  sec   44.3  MBytes    371   Mbits/sec     0.078   ms
554       59230/559499  (11%) 80.346/75.514/94.293/ 2.858 ms 31590 pps 3.58 MByte
555       577.97
556       [   3]  9.00-10.00  sec   44.4  MBytes    373  Mbits/sec     0.073   ms
557       58782/618394 (9.5%) 79.125/75.511/93.638/ 1.643 ms 31702 pps 3.55 MByte
558       588.99
559       [   3]  10.00-10.08  sec   3.53  MBytes    367  Mbits/sec    0.129   ms
560       6026/595236  (1%)  94.967/80.709/99.685/  3.560 ms 31107 pps 3.58 MByte
561       483.12
562       [   3]  0.00-10.08  sec    449  MBytes    374  Mbits/sec     0.129   ms
563       592717/913046  (65%)  79.453/  2.367/99.685/  5.200 ms 31776 pps (null)
564       587.91
565
566
567       where (per -e,)
568              Latency End to end latency in mean/minimum/maximum/standard  de‐
569              viation  format (Note: requires the client's and server's system
570              clocks to be synchronized to a common reference, e.g. using pre‐
571              cision  time  protocol  PTP.  A GPS disciplined OCXO is a recom‐
572              mended reference.)
573              PPS Received packet rate in packets per second
574              inP inP, short for in progress, is the average number  of  bytes
575              in  progress  or  in  flight.  This is taken from an application
576              write to read perspective. (Requires --trip-times on client. See
577              Little's law in NOTES.)
578              NetPwr Network power defined as (throughput / latency)
579
580
581       Isochronous UDP tests (client)
582
583       iperf -c 192.168.100.33 -u -e -i 1 --isochronous=60:100m,10m --realtime
584       ------------------------------------------------------------
585       Client connecting to 192.168.100.33, UDP port 5001 with pid 14971
586       UDP  isochronous:  60  frames/sec mean= 100 Mbit/s, stddev=10.0 Mbit/s,
587       Period/IPG=16.67/0.005 ms
588       UDP buffer size:  208 KByte (default)
589       ------------------------------------------------------------
590       [  3] local 192.168.100.76 port  42928  connected  with  192.168.100.33
591       port 5001
592       [   ID]  Interval         Transfer      Bandwidth       Write/Err   PPS
593       frames:tx/missed/slips
594       [  3] 0.00-1.00 sec  12.0 MBytes   101 Mbits/sec  8615/0      8493  pps
595       62/0/0
596       [   3]  1.00-2.00 sec  12.0 MBytes   100 Mbits/sec  8556/0     8557 pps
597       60/0/0
598       [  3] 2.00-3.00 sec  12.0 MBytes   101 Mbits/sec  8586/0      8586  pps
599       60/0/0
600       [   3]  3.00-4.00 sec  12.1 MBytes   102 Mbits/sec  8687/0     8687 pps
601       60/0/0
602       [  3] 4.00-5.00 sec  11.8 MBytes  99.2 Mbits/sec  8468/0      8468  pps
603       60/0/0
604       [   3]  5.00-6.00 sec  11.9 MBytes  99.8 Mbits/sec  8519/0     8520 pps
605       60/0/0
606       [  3] 6.00-7.00 sec  12.1 MBytes   102 Mbits/sec  8694/0      8694  pps
607       60/0/0
608       [   3]  7.00-8.00 sec  12.1 MBytes   102 Mbits/sec  8692/0     8692 pps
609       60/0/0
610       [  3] 8.00-9.00 sec  11.9 MBytes   100 Mbits/sec  8537/0      8537  pps
611       60/0/0
612       [   3] 9.00-10.00 sec  11.8 MBytes  99.0 Mbits/sec  8450/0     8450 pps
613       60/0/0
614       [  3] 0.00-10.01 sec   120 MBytes   100 Mbits/sec  85867/0     8574 pps
615       602/0/0
616       [  3] Sent 85867 datagrams
617       [  3] Server Report:
618       [   3] 0.00-9.98 sec   120 MBytes   101 Mbits/sec   0.009 ms  196/85867
619       (0.23%)  0.665/ 0.083/ 1.318/ 0.174 ms 8605 pps  18903.85
620
621       where (per -e,)
622              frames:tx/missed/slips Total number  of  isochronous  frames  or
623              bursts.  Total  number  of  frame  ids not sent. Total number of
624              frame slips
625
626
627       Isochronous UDP tests (server)
628
629       iperf -s -e -u --udp-histogram=100u,2000 --realtime
630       ------------------------------------------------------------
631       Server listening on UDP port 5001 with pid 5175
632       Receiving 1470 byte datagrams
633       UDP buffer size:  208 KByte (default)
634       ------------------------------------------------------------
635       [  3] local 192.168.100.33 port 5001 connected with 192.168.100.76 port
636       42928 isoch (peer 2.0.13-alpha)
637       [ ID] Interval        Transfer     Bandwidth        Jitter   Lost/Total
638       Latency avg/min/max/stdev PPS  NetPwr  Frames/Lost
639       [  3] 0.00-9.98 sec   120 MBytes   101 Mbits/sec   0.010 ms   196/85867
640       (0.23%)  0.665/ 0.083/ 1.318/ 0.284 ms 8585 pps  18903.85  601/1
641       [             3]           0.00-9.98           sec           T8(f)-PDF:
642       bin(w=100us):cnt(85671)=1:2,2:844,3:10034,4:8493,5:8967,6:8733,7:8823,8:9023,9:8901,10:8816,11:7730,12:4563,13:741,14:1
643       (5.00/95.00%=3/12,Outliers=0,obl/obu=0/0)
644       [             3]           0.00-9.98           sec           F8(f)-PDF:
645       bin(w=100us):cnt(598)=15:2,16:1,17:27,18:68,19:125,20:136,21:103,22:83,23:22,24:23,25:5,26:3
646       (5.00/95.00%=17/24,Outliers=0,obl/obu=0/0)
647
648
649       where, Frames/Lost  Total  number of frames (or bursts) received. Total
650              number of bursts lost or errored
651              T8(f)-PDF Latency histogram for packets
652              F8(f)-PDF Latency histogram for frames
653
654
655

ENVIRONMENT

657       Note:  The environment variable option settings haven't been maintained
658              well.  See the source code if these are of interest.
659

NOTES

661       Numeric  options:  Some  numeric  options support format characters per
662       '<value>c' (e.g. 10M) where the c format  characters  are  k,m,g,K,M,G.
663       Lowercase format characters are 10^3 based and uppercase are 2^n based,
664       e.g. 1k = 1000, 1K = 1024, 1m = 1,000,000 and 1M = 1,048,576
665
666       Rate limiting: The -b option supports read and write rate  limiting  at
667       the application level.  The -b option on the client also supports vari‐
668       able offered loads through the <mean>,<standard deviation> format, e.g.
669       -b 100m,10m. The distribution used is log normal. The same applies to the
670       isochronous option. The -b on the server rate limits the reads.  Socket
671       based  pacing  is  also supported using the --fq-rate long option. This
672       will work with the --reverse and --full-duplex options as well.
673
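       For example, a client using fair-queueing based pacing at 100
       Mbit/sec (host and values are placeholders; requires a platform with
       SO_MAX_PACING_RATE support):

              iperf -c <host> -e -i 1 --fq-rate 100m -t 20
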
674       Synchronized  clocks:  The  --trip-times  option  indicates  that   the
675       client's  and  server's  clocks are synchronized to a common reference.
676       Network Time Protocol (NTP) or Precision Time Protocol (PTP)  are  com‐
677       monly  used for this. The reference clock(s) error and the synchroniza‐
678       tion protocols will affect the accuracy of any end to end latency  mea‐
679       surements.
680
681       Binding  is done at the logical level (ip address or layer 3) using the
682       -B option and at the device (or layer 2) level using  the  percent  (%)
683       separator for both the client and the server. On the client, the -B op‐
684       tion affects the bind(2) system call, and will set the  source  ip  ad‐
685       dress  and the source port, e.g. iperf -c <host> -B 192.168.100.2:6002.
686       This controls the packet's source values but not routing.  These can be
687       confusing  in  that a route or device lookup may not be that of the de‐
688       vice with the configured source IP.  So, for example, if the IP address
689       of eth0 is used for -B and the routing table for the destination IP ad‐
690       dress resolves the output interface to be eth1, then the host will send
691       the packet out device eth1 while using the source IP address of eth0 in
692       the packet.  To affect the physical output interface (e.g.  dual  homed
693       systems) either use -c <host>%<dev> (requires root) which bypasses this
694       host route table lookup, or configure policy routing per each -B source
695       address  and  set  the  output  interface  appropriately  in the policy
696       routes. On the server or receive side, only packets destined to the -B
697       IP address will be received. It's also useful for multicast. For example,
698       iperf -s -B 224.0.0.1%eth0 will only accept ip multicast  packets  with
699       dest  ip 224.0.0.1 that are received on the eth0 interface, while iperf
700       -s -B 224.0.0.1 will receive those packets on any interface.  Finally,
701       the   device   specifier   is  required  for  v6  link-local,  e.g.  -c
702       [v6addr]%<dev> -V, to select the output interface.
703
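       Some illustrative bind invocations (addresses, ports and device names
       are placeholders):

              iperf -c <host> -B 192.168.100.2:6002   # set source ip address and port
              iperf -c <host>%eth1                    # bind the output device (requires root)
              iperf -s -u -B 224.0.0.1%eth0           # accept multicast (UDP) only on eth0
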
704       Reverse, full-duplex, dualtest (-d) and tradeoff  (-r):  The  --reverse
705       (-R)  and  --full-duplex  options can be confusing when compared to the
706       older options of --dualtest (-d) and --tradeoff (-r). The newer options
707       of  --reverse and --full-duplex only open one socket and read and write
708       to the same socket descriptor, i.e. use the socket in full duplex mode.
709       The  older  -d and -r open second sockets in the opposite direction and
710       do not use a socket in full duplex mode. Note that full duplex  applies
711       to the socket and not to the network devices and that full duplex sock‐
712       ets are supported by the operating systems regardless of whether an
713       underlying network supports full duplex transmission and reception. It's
714       suggested to use --reverse if you want to test through a NAT firewall (or
715       -R  on non-windows systems). This applies role reversal of the test af‐
716       ter opening the full duplex socket.  (Note: Firewall  piercing  may  be
717       required to use -d and -r if a NAT gateway is in the path.)
718
719       Also,  the  --reverse -b <rate> setting behaves differently for TCP and
720       UDP. For TCP it will rate limit the read side, i.e.  the  iperf  client
721       (role reversed to act as a server) reading from the full duplex socket.
722       This will in turn flow control the reverse  traffic  per  standard  TCP
723       congestion control. The --reverse -b <rate> will be applied on transmit
724       (i.e. the server role reversed to act as a client) for UDP since  there
725       is  no flow control with UDP. There is no option to directly rate limit
726       the writes with TCP testing when using --reverse.
727
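       For example, to test through a NAT firewall over a single full duplex
       socket, with the data flowing from the server back to the client
       (host is a placeholder):

              iperf -s                             # on the host outside the NAT
              iperf -c <host> -e -i 1 --reverse    # behind the NAT; server transmits
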
728       TCP Connect times: The TCP connect time (or three way handshake) can be
729       seen  on  the iperf client when the -e (--enhanced) option is set. Look
730       for the ct=<value>  in  the  connected  message,  e.g. in  '[  3]  local
731       192.168.1.4  port  48736  connected with 192.168.1.1 port 5001 (ct=1.84
732       ms)' shows the 3WHS took 1.84 milliseconds.
733
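       For example, a sketch that measures ten connect times without sending
       any data (host is a placeholder):

              iperf -c <host> -e -i 1 --connect-only=10
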
734       Packet per second (pps) calculation: The packets per second calculation
735       is  done  as  a derivative, i.e. number of packets divided by time. The
736       time is taken from the previous last packet to the current last packet.
737       It is not the sample interval time. The last packet can land at differ‐
738       ent times within an interval.  This means that pps  does  not  have  to
739       match rx bytes divided by the sample interval.  Also, with --trip-times
740       set, the packet time on receive is set by the sender's  write  time  so
741       pps indicates the end to end pps with --trip-times. The RX pps calcula‐
742       tion is receive side only when -e is set and --trip-times is not set.
743
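       As an illustration, if 850 packets arrive in a one second sample
       interval, the last packet of the previous interval arrived at
       t=1.0000 sec, and the last packet of this interval arrived at
       t=2.0010 sec, then pps is 850 / 1.001, or about 849 packets per
       second, even though the sample interval itself is exactly one second.
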
744       Little's Law in queuing theory is a theorem that determines the average
745       number of items (L) in a stationary queuing system based on the average
746       waiting time (W) of an item within a system and the average  number  of
747       items arriving at the system per unit of time (lambda). Mathematically,
748       it's L = lambda * W. As used here, the units  are  bytes.  The  arrival
749       rate is taken from the writes.
750
751       Network  power: The network power (NetPwr) metric is experimental. It's
752       a convenience function defined as throughput/delay.  For TCP transmits,
753       the delay is the sampled RTT times.  For TCP receives, the delay is the
754       write to read latency.  For UDP  the  delay  is  the  end/end  latency.
755       Don't  confuse  this  with  the  physics definition of power (delta en‐
756       ergy/delta time) but more of a measure of a desirable property  divided
757       by  an  undesirable  property. Also note, one must use -i interval with
758       TCP to get this as that's what sets the RTT sampling rate.  The  metric
759       is scaled to assist with human readability.
760
761       Multicast:  Iperf 2 supports multicast with a couple of caveats. First,
762       multicast streams cannot take advantage of the -P  option.  The  server
763       will serialize multicast streams. Also, it's highly encouraged to use a
764       -t on a server that will be used for multicast clients. That is because
765       the  single  end  of  traffic packet sent from client to server may get
766       lost and there are no redundant end of traffic packets.  Setting -t  on
767       the  server will kill the server thread in the event this packet is in‐
768       deed lost.
769
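       A multicast sketch (group address, ttl and times are placeholders):

              iperf -s -u -B 224.0.0.1 -t 60    # server ends even if the last packet is lost
              iperf -c 224.0.0.1 -u -T 2 -t 10  # client with a multicast ttl of 2
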
770       Fast Sampling: Use ./configure --enable-fastsampling and  then  compile
771       from  source  to  enable four digit (e.g. 1.0000) precision in reports'
772       timestamps. Useful for sub-millisecond sampling.
773

DIAGNOSTICS

775       Use ./configure --enable-thread-debug and then compile from  source  to
776       enable both asserts and advanced debugging of the tool itself.
777

BUGS

779       See https://sourceforge.net/p/iperf2/tickets/
780

AUTHORS

782       Iperf2, based on iperf (originally written by Mark Gates and Alex
783       Warshavsky), has a goal of maintenance with some  feature  enhancement.
784       Other contributions from Ajay Tirumala, Jim Ferguson, Jon Dugan <jdugan
785       at x1024 dot net>, Feng Qin, Kevin Gibbs, John Estabrook  <jestabro  at
786       ncsa.uiuc.edu>,  Andrew  Gallatin <gallatin at gmail.com>, Stephen Hem‐
787       minger <shemminger at linux-foundation.org>, Tim Auckland <tim.auckland
788       at gmail.com>, Robert J. McMahon <rjmcmahon at rjmcmahon.com>
789

SEE ALSO

791       accept(2),bind(2),close(2),connect(2),fcntl(2),getpeername(2),getsock‐
792       name(2),getsockopt(2),listen(2),read(2),recv(2),select(2),send(2),set‐
793       sockopt(2),shutdown(2),write(2),ip(7),socket(7),tcp(7),udp(7)
794
795       Source code at http://sourceforge.net/projects/iperf2/
796
797       "Unix  Network  Programming,  Volume 1: The Sockets Networking API (3rd
798       Edition) 3rd Edition" by W. Richard Stevens (Author), Bill Fenner  (Au‐
799       thor), Andrew M. Rudoff (Author)
800
801
802
803NLANR/DAST                       January 2021                         IPERF(1)