IRTT(1)                           IRTT Manual                          IRTT(1)

NAME
       irtt - Isochronous Round-Trip Time

SYNOPSIS
       irtt command [args]

       irtt help command

DESCRIPTION
       IRTT measures round-trip time and other latency-related metrics using
       UDP packets sent on a fixed period, and produces both text and JSON
       output.

COMMANDS
       client runs the client

       server runs the server

       bench  runs HMAC and fill benchmarks

       clock  runs wall vs monotonic clock test

       sleep  runs sleep accuracy test

       version
              shows the version

EXAMPLES
       After installing IRTT, start a server:

              $ irtt server
              IRTT server starting...
              [ListenerStart] starting IPv6 listener on [::]:2112
              [ListenerStart] starting IPv4 listener on 0.0.0.0:2112

       While that’s running, run a client.  If no options are supplied, it
       will send a request once per second, like ping.  Here we simulate a
       one-minute G.711 VoIP conversation by using an interval of 20ms and
       randomly filled payloads of 172 bytes:

              $ irtt client -i 20ms -l 172 -d 1m --fill=rand --sfill=rand -q 192.168.100.10
              [Connecting] connecting to 192.168.100.10
              [Connected] connected to 192.168.100.10:2112

                                       Min     Mean   Median      Max  Stddev
                                       ---     ----   ------      ---  ------
                              RTT  11.93ms  20.88ms   19.2ms  80.49ms  7.02ms
                       send delay   4.99ms  12.21ms  10.83ms  50.45ms  5.73ms
                    receive delay   6.38ms   8.66ms   7.86ms  69.11ms  2.89ms

                    IPDV (jitter)    782ns   4.53ms   3.39ms  64.66ms   4.2ms
                        send IPDV    256ns   3.99ms   2.98ms  35.28ms  3.69ms
                     receive IPDV    896ns   1.78ms    966µs  62.28ms  2.86ms

                   send call time   56.5µs   82.8µs           18.99ms   348µs
                      timer error       0s   21.7µs           19.05ms   356µs
                server proc. time   23.9µs   26.9µs             141µs  11.2µs

                              duration: 1m0s (wait 241.5ms)
                 packets sent/received: 2996/2979 (0.57% loss)
               server packets received: 2980/2996 (0.53%/0.03% loss up/down)
                   bytes sent/received: 515312/512388
                     send/receive rate: 68.7 Kbps / 68.4 Kbps
                         packet length: 172 bytes
                           timer stats: 4/3000 (0.13%) missed, 0.11% error

       In the results above, the client and server are located at two
       different sites, around 50km from one another, each of which connects
       to the Internet via point-to-point WiFi.  The client is 3km NLOS
       through trees located near its transmitter, which is likely the
       reason for the higher upstream packet loss, mean send delay and IPDV.
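       The summary figures above can be cross-checked with simple
       arithmetic.  The Go sketch below (the helper names are illustrative,
       not part of the IRTT API) reproduces the packet count, byte count and
       send rate reported in the sample run:

```go
package main

import "fmt"

// expectedPackets returns how many sends a fixed-period schedule makes in a
// test: one per interval (illustrative helper, not part of the IRTT API).
func expectedPackets(durationSec, intervalMs int) int {
	return durationSec * 1000 / intervalMs
}

// rateKbps converts a byte count over a duration to kilobits per second.
func rateKbps(bytes, durationSec int) float64 {
	return float64(bytes) * 8 / float64(durationSec) / 1000
}

func main() {
	scheduled := expectedPackets(60, 20) // 3000 sends in 1m at 20ms
	sent := scheduled - 4                // timer stats: 4/3000 missed
	bytes := sent * 172                  // 515312 bytes, as reported
	fmt.Println(scheduled, sent, bytes)
	fmt.Printf("%.1f Kbps\n", rateKbps(bytes, 60)) // 68.7 Kbps
}
```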

BUGS
       • Windows is unable to set DSCP values for IPv6.

       • Windows is unable to set the source IP address, so --set-src-ip may
         not be used on the server.

LIMITATIONS
              “It is the limitations of software that give it life.”

                     -Me, justifying my limitations

   Isochronous (fixed period) send schedule
       Currently, IRTT only sends packets on a fixed period, foregoing the
       ability to simulate arbitrary traffic.  Accepting this limitation
       offers some benefits:

       • It’s easy to implement

       • It’s easy to calculate how many packets and how much data will be
         sent in a given time

       • It simplifies timer error compensation

       Also, isochronous packets are commonly seen in VoIP, games and some
       streaming media, so it already simulates an array of common types of
       traffic.

   Fixed packet lengths for a given test
       Packet lengths are fixed for the duration of the test.  While this
       may not be an accurate simulation of some types of traffic, it means
       that IPDV measurements are accurate, where they wouldn’t be in any
       other case.

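       IPDV is the difference in delay between consecutive packets, so any
       variation in per-packet serialization time would pollute it.  A
       minimal sketch of the raw calculation (the helper name is
       hypothetical, and IRTT’s summary statistics may be derived
       differently):

```go
package main

import "fmt"

// ipdv returns the delay differences between consecutive packets, the raw
// values behind an IPDV (jitter) statistic.  With fixed packet lengths,
// serialization time is constant, so these differences reflect only
// queueing and scheduling variation.  Hypothetical helper for illustration.
func ipdv(delaysMs []float64) []float64 {
	if len(delaysMs) < 2 {
		return nil
	}
	out := make([]float64, 0, len(delaysMs)-1)
	for i := 1; i < len(delaysMs); i++ {
		out = append(out, delaysMs[i]-delaysMs[i-1])
	}
	return out
}

func main() {
	fmt.Println(ipdv([]float64{10.0, 12.5, 11.0})) // [2.5 -1.5]
}
```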
   Stateful protocol
       There are numerous benefits to stateless protocols, particularly for
       developers and data centers, including simplified server design,
       horizontal scalability, and easily implemented zero-downtime
       restarts.  However, in this case, a stateful protocol provides
       important benefits to the user, including:

       • Smaller packet sizes (a design goal), as context does not need to
         be included in every request

       • More accurate measurement of upstream vs downstream packet loss
         (this gets worse in a stateless protocol as RTT approaches the test
         duration, complicating interplanetary tests!)

       • More accurate rate and test duration limiting on the server

   In-memory results storage
       Results for each round-trip are stored in memory as the test is being
       run.  Each result takes 72 bytes in memory (8 64-bit timestamps and a
       64-bit server received packet window), so this limits the effective
       duration of the test, especially at very small send intervals.
       However, the advantages are:

       • It’s easier to perform statistical analysis (like calculation of
         the median) on fixed arrays than on running data values

       • We don’t need to either send client timestamps to the server, or
         maintain a local running window of sent packet info, because
         they’re all in memory, no matter when server replies come back

       • Not accessing the disk during the test to write test output
         prevents inadvertently affecting the results

       • It simplifies the API

       As a consequence of storing results in memory, packet sequence
       numbers are fixed at 32 bits.  If all 2^32 sequence numbers were
       used, the results would require over 300 GB of virtual memory to
       record while the test is running.  That is why 64-bit sequence
       numbers are currently unnecessary.

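       The storage arithmetic above can be verified directly.  A quick
       sketch (the function is illustrative, not from the IRTT codebase):

```go
package main

import "fmt"

// resultsMemory computes in-memory storage for n round-trip results at 72
// bytes each: eight 64-bit timestamps plus one 64-bit received window.
// Illustrative helper, not part of the IRTT codebase.
func resultsMemory(n uint64) uint64 {
	const bytesPerResult = 8*8 + 8
	return n * bytesPerResult
}

func main() {
	const gb = 1_000_000_000
	// One hour at a 1ms send interval vs. the full 32-bit sequence space:
	fmt.Println(resultsMemory(3600 * 1000))    // 259200000 bytes (~259 MB)
	fmt.Println(resultsMemory(1<<32)/gb, "GB") // 309 GB
}
```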
   64-bit received window
       In order to determine per-packet differentiation between upstream and
       downstream loss, a 64-bit “received window” may be returned with each
       packet, containing the receipt status of the previous 64 packets.
       This can be enabled using --stats=window/both with the irtt client.
       Its limited width and simple bitmap format lead to some caveats:

       • Per-packet differentiation is not available (for any intervening
         packets) if greater than 64 packets are lost in succession.  These
         packets will be marked with the generic Lost.

       • While any packet marked LostDown is guaranteed to be marked
         properly, there is no confirmation of receipt of the receive window
         from the client to the server, so packets may sometimes be
         erroneously marked LostUp, for example, if they arrive late to the
         server and slide out of the received window before they can be
         confirmed to the client, or if the received window is lost on its
         way to the client and not amended by a later packet’s received
         window.

       There are many ways that this simple approach could be improved, such
       as by:

       • Allowing a wider window

       • Encoding receipt seqnos in a more intelligent way to allow a wider
         seqno range

       • Sending confirmation of window receipt from the client to the
         server and re-sending unreceived windows

       However, the current strategy means that a good approximation of
       per-packet loss results can be obtained with only 8 additional bytes
       in each packet.  It also requires very little computational time on
       the server, and almost all computation on the client occurs during
       results generation, after the test is complete.  It isn’t as accurate
       with late (out-of-order) upstream packets or with long sequences of
       lost packets, but high loss or high numbers of late packets typically
       indicate more severe network conditions that should be corrected
       first anyway, perhaps before per-packet results matter.  Note that in
       case of very high packet loss, the total number of packets received
       by the server but not returned to the client (which can be obtained
       using --stats=count) will still be correct, which will still provide
       an accurate average loss percentage in each direction over the course
       of the test.

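       To make the bitmap format concrete, here is a sketch of how such a
       received window could be maintained and read.  The type and method
       names are invented for illustration; consult the IRTT source for the
       actual wire format:

```go
package main

import "fmt"

// window is a toy model of the 64-bit received window: bit i records
// whether packet latestSeq-i was received.  Invented names; the real wire
// format lives in the IRTT source.
type window struct {
	latestSeq uint32 // highest sequence number the window covers
	bits      uint64 // bit 0 = latestSeq, bit 1 = latestSeq-1, ...
}

// markReceived records receipt of seq, sliding the window forward when a
// newer sequence number arrives.  Packets more than 64 sequence numbers
// old fall off the end, which is the source of the caveats above.
func (w *window) markReceived(seq uint32) {
	if seq > w.latestSeq {
		w.bits <<= seq - w.latestSeq // slide forward; old bits fall off
		w.latestSeq = seq
	}
	if off := w.latestSeq - seq; off < 64 {
		w.bits |= 1 << off
	}
}

// received reports whether seq is marked received in the window.
func (w *window) received(seq uint32) bool {
	off := w.latestSeq - seq
	return seq <= w.latestSeq && off < 64 && w.bits&(1<<off) != 0
}

func main() {
	var w window
	w.markReceived(1)
	w.markReceived(3) // packet 2 never arrived: lost upstream
	fmt.Println(w.received(1), w.received(2), w.received(3)) // true false true
}
```

       Returning just these 8 bytes per packet is what enables the
       per-packet up/down loss classification described above.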
   Use of Go
       IRTT is written in Go.  That carries with it:

       • Non-negligible system call overhead

       • A larger executable size than with C

       • Somewhat slower execution speed than C (although not that much
         slower: https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=go&lang2=gcc)

       However, Go also has characteristics that make it a good fit for
       this application:

       • Go’s target is network and server applications, with a focus on
         simplicity, reliability and efficiency, which is appropriate for
         IRTT

       • Memory footprint tends to be significantly lower than with some
         interpreted languages

       • It’s easy to support a broad array of hardware and OS combinations

SEE ALSO
       irtt-client(1), irtt-server(1)

       IRTT GitHub repository (https://github.com/heistp/irtt/)

AUTHOR
       Pete Heist <pete@heistp.net>

       Many thanks to both Toke Høiland-Jørgensen and Dave Täht from the
       Bufferbloat project (https://www.bufferbloat.net/) for their valuable
       advice.  Any problems in design or implementation are entirely my
       own.

HISTORY
       IRTT was originally written to improve the latency and packet loss
       measurements for the excellent Flent (https://flent.org) tool.  Flent
       was developed by and for the Bufferbloat
       (https://www.bufferbloat.net/projects/) project, which aims to reduce
       “chaotic and laggy network performance,” making this project valuable
       to anyone who values their time and sanity while using the Internet.

v0.9.0                         February 11, 2018                       IRTT(1)