resperf(1)                  General Commands Manual                 resperf(1)

NAME
       resperf - test the resolution performance of a caching DNS server

SYNOPSIS
       resperf-report [-a local_addr] [-d datafile] [-R] [-M mode]
       [-s server_addr] [-p port] [-x local_port] [-t timeout] [-b bufsize]
       [-f family] [-e] [-D] [-y [alg:]name:secret] [-h] [-i interval]
       [-m max_qps] [-r rampup_time] [-c constant_traffic_time] [-L max_loss]
       [-C clients] [-q max_outstanding] [-F fall_behind] [-v] [-W]

       resperf [-a local_addr] [-d datafile] [-R] [-M mode] [-s server_addr]
       [-p port] [-x local_port] [-t timeout] [-b bufsize] [-f family] [-e]
       [-D] [-y [alg:]name:secret] [-h] [-i interval] [-m max_qps]
       [-P plot_data_file] [-r rampup_time] [-c constant_traffic_time]
       [-L max_loss] [-C clients] [-q max_outstanding] [-F fall_behind] [-v]
       [-W]

DESCRIPTION
       resperf is a companion tool to dnsperf.  dnsperf was primarily
       designed for benchmarking authoritative servers, and it does not
       work well with caching servers that are talking to the live
       Internet.  One reason for this is that dnsperf uses a "self-pacing"
       approach, which is based on the assumption that you can keep the
       server 100% busy simply by sending it a small burst of back-to-back
       queries to fill up network buffers, and then sending a new query
       whenever you get a response back.  This approach works well for
       authoritative servers that process queries in order and one at a
       time; it also works pretty well for a caching server in a closed
       laboratory environment talking to a simulated Internet that's all
       on the same LAN.  Unfortunately, it does not work well with a
       caching server talking to the actual Internet, which may need to
       work on thousands of queries in parallel to achieve its maximum
       throughput.  There have been numerous attempts to use dnsperf (or
       its predecessor, queryperf) for benchmarking live caching servers,
       usually with poor results.  Therefore, a separate tool designed
       specifically for caching servers is needed.

   How resperf works
       Unlike the "self-pacing" approach of dnsperf, resperf works by
       sending DNS queries at a controlled, steadily increasing rate.  By
       default, resperf will send traffic for 60 seconds, linearly
       increasing the amount of traffic from zero to 100,000 queries per
       second (or max_qps).

       During the test, resperf listens for responses from the server and
       keeps track of response rates, failure rates, and latencies.  It
       will also continue listening for responses for an additional 40
       seconds after it has stopped sending traffic, so that there is time
       for the server to respond to the last queries sent.  This time
       period was chosen to be longer than the overall query timeout of
       both Nominum CacheServe and current versions of BIND.

       If the test is successful, the query rate will at some point exceed
       the capacity of the server and queries will be dropped, causing the
       response rate to stop growing or even decrease as the query rate
       increases.

       The result of the test is a set of measurements of the query rate,
       response rate, failure response rate, and average query latency as
       functions of time.

   What you will need
       Benchmarking a live caching server is serious business.  A fast
       caching server like Nominum CacheServe, resolving a mix of cacheable
       and non-cacheable queries typical of ISP customer traffic, is
       capable of resolving well over 1,000,000 queries per second.  In the
       process, it will send more than 40,000 queries per second to
       authoritative servers on the Internet, and receive responses to most
       of them.  Assuming an average request size of 50 bytes and a
       response size of 150 bytes, this amounts to some 16 Mbps of outgoing
       (40,000 x 50 bytes x 8 bits) and 48 Mbps of incoming (40,000 x 150
       bytes x 8 bits) traffic.  If your Internet connection can't handle
       the bandwidth, you will end up measuring the speed of the
       connection, not the server, and may saturate the connection, causing
       a degradation in service for other users.

       Make sure there is no stateful firewall between the server and the
       Internet, because most of them can't handle the amount of UDP
       traffic the test will generate and will end up dropping packets,
       skewing the test results.  Some will even lock up or crash.

       You should run resperf on a machine separate from the server under
       test, on the same LAN.  Preferably, this should be a Gigabit
       Ethernet network.  The machine running resperf should be at least as
       fast as the machine being tested; otherwise, it may end up being the
       bottleneck.

       There should be no other applications running on the machine running
       resperf.  Performance testing at the traffic levels involved is
       essentially a hard real-time application - consider the fact that at
       a query rate of 100,000 queries per second, if resperf gets delayed
       by just 1/100 of a second, 1000 incoming UDP packets will arrive in
       the meantime.  This is more than most operating systems will buffer,
       which means packets will be dropped.

       Because the granularity of the timers provided by operating systems
       is typically too coarse to accurately schedule packet transmissions
       at sub-millisecond intervals, resperf will busy-wait between packet
       transmissions, constantly polling for responses in the meantime.
       Therefore, it is normal for resperf to consume 100% CPU during the
       whole test run, even during periods where query rates are relatively
       low.

       You will also need a set of test queries in the dnsperf file format.
       See the dnsperf man page for instructions on how to construct this
       query file.  To make the test as realistic as possible, the queries
       should be derived from recorded production client DNS traffic,
       without removing duplicate queries or other filtering.  With the
       default settings, resperf will use up to 3 million queries in each
       test run.

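       For reference, a query file in the dnsperf format has one domain
       name and query type per line; a minimal hand-written file (the names
       here are only placeholders) might look like:

              www.example.com A
              example.com MX
              mail.example.com AAAA
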
       If the caching server to be tested has a configurable limit on the
       number of simultaneous resolutions, like the max-recursive-clients
       statement in Nominum CacheServe or the recursive-clients option in
       BIND 9, you will probably have to increase it.  As a starting point,
       we recommend a value of 10000 for Nominum CacheServe and 100000 for
       BIND 9.  Should the limit be reached, it will show up in the plots
       as an increase in the number of failure responses.

       The server being tested should be restarted at the beginning of each
       test to make sure it is starting with an empty cache.  If the cache
       already contains data from a previous test run that used the same
       set of queries, almost all queries will be answered from the cache,
       yielding inflated performance numbers.

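       As an alternative to a full restart, some servers can flush their
       cache in place; with BIND 9, for example, the following command
       should leave you with an empty cache between runs:

              rndc flush
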
       To use the resperf-report script, you need to have gnuplot
       installed.  Make sure your installed version of gnuplot supports the
       png terminal driver.  If your gnuplot doesn't support png but does
       support gif, you can change the line saying terminal=png in the
       resperf-report script to terminal=gif.

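       One way to check for png support (the exact listing format varies
       between gnuplot versions) is to ask gnuplot for its list of terminal
       drivers:

              echo "set terminal" | gnuplot 2>&1 | grep -iw png
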
   Running the test
       resperf is typically invoked via the resperf-report script, which
       will run resperf with its output redirected to a file and then
       automatically generate an illustrated report in HTML format.
       Command line arguments given to resperf-report will be passed on
       unchanged to resperf.

       When running resperf-report, you will need to specify at least the
       server IP address and the query data file.  A typical invocation
       will look like

              resperf-report -s 10.0.0.2 -d queryfile

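       If you expect the server to handle more than the default maximum of
       100,000 queries per second, you can raise the target rate and
       lengthen the ramp-up accordingly; the values below are only
       illustrative:

              resperf-report -s 10.0.0.2 -d queryfile -m 200000 -r 120
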
       With default settings, the test run will take at most 100 seconds
       (60 seconds of ramping up traffic and then 40 seconds of waiting for
       responses), but in practice, the 60-second traffic phase will
       usually be cut short.  To be precise, resperf can transition from
       the traffic-sending phase to the waiting-for-responses phase in
       three different ways:

       • Running for the full allotted time and successfully reaching the
         maximum query rate (by default, 60 seconds and 100,000 qps,
         respectively).  Since this is a very high query rate, this will
         rarely happen (with today's hardware); one of the other two
         conditions listed below will usually occur first.

       • Exceeding 65,536 outstanding queries.  This often happens as a
         result of (successfully) exceeding the capacity of the server
         being tested, causing the excess queries to be dropped.  The limit
         of 65,536 queries comes from the number of possible values for the
         ID field in the DNS packet.  resperf needs to allocate a unique ID
         for each outstanding query, and is therefore unable to send
         further queries if the set of possible IDs is exhausted.

       • When resperf finds itself unable to send queries fast enough.
         resperf will notice if it is falling behind in its scheduled query
         transmissions, and if this backlog reaches 1000 queries, it will
         print a message like "Fell behind by 1000 queries" (or whatever
         the actual number is at the time) and stop sending traffic.

       Regardless of which of the above conditions caused the
       traffic-sending phase of the test to end, you should examine the
       resulting plots to make sure the server's response rate is
       flattening out toward the end of the test.  If it is not, then you
       are not loading the server enough.  If you are getting the "Fell
       behind" message, make sure that the machine running resperf is fast
       enough and has no other applications running.

       You should also monitor the CPU usage of the server under test.  It
       should reach close to 100% CPU at the point of maximum traffic; if
       it does not, you most likely have a bottleneck in some other part of
       your test setup, for example, your external Internet connection.

       The report generated by resperf-report will be stored with a unique
       file name based on the current date and time, e.g.,
       20060812-1550.html.  The PNG images of the plots and other auxiliary
       files will be stored in separate files beginning with the same
       date-time string.  To view the report, simply open the .html file in
       a web browser.

       If you need to copy the report to a separate machine for viewing,
       make sure to copy the .png files along with the .html file (or
       simply copy all the files, e.g., using scp 20060812-1550.*
       host:directory/).

   Interpreting the report
       The .html file produced by resperf-report consists of two sections.
       The first section, "Resperf output", contains output from the
       resperf program such as progress messages, a summary of the command
       line arguments, and summary statistics.  The second section,
       "Plots", contains two plots generated by gnuplot:
       "Query/response/failure rate" and "Latency".

       The "Query/response/failure rate" plot contains three graphs.  The
       "Queries sent per second" graph shows the amount of traffic being
       sent to the server; this should be very close to a straight diagonal
       line, reflecting the linear ramp-up of traffic.

       The "Total responses received per second" graph shows how many of
       the queries received a response from the server.  All responses are
       counted, whether successful (NOERROR or NXDOMAIN) or not (e.g.,
       SERVFAIL).

       The "Failure responses received per second" graph shows how many of
       the queries received a failure response.  A response is considered
       to be a failure if its RCODE is neither NOERROR nor NXDOMAIN.

       By visually inspecting the graphs, you can get an idea of how the
       server behaves under increasing load.  The "Total responses received
       per second" graph will initially closely follow the "Queries sent
       per second" graph (often rendering it invisible in the plot as the
       two graphs are plotted on top of one another), but when the load
       exceeds the server's capacity, the "Total responses received per
       second" graph may diverge from the "Queries sent per second" graph
       and flatten out, indicating that some of the queries are being
       dropped.

       The "Failure responses received per second" graph will normally show
       a roughly linear ramp close to the bottom of the plot with some
       random fluctuation, since typical query traffic will contain some
       small percentage of failing queries randomly interspersed with the
       successful ones.  As the total traffic increases, the number of
       failures will increase proportionally.

       If the "Failure responses received per second" graph turns sharply
       upwards, this can be another indication that the load has exceeded
       the server's capacity.  This will happen if the server reacts to
       overload by sending SERVFAIL responses rather than by dropping
       queries.  Since Nominum CacheServe and BIND 9 will both respond with
       SERVFAIL when they exceed their max-recursive-clients or
       recursive-clients limit, respectively, a sudden increase in the
       number of failures could mean that the limit needs to be increased.

       The "Latency" plot contains a single graph marked "Average latency".
       This shows how the latency varies during the course of the test.
       Typically, the latency graph will exhibit a downwards trend because
       the cache hit rate improves as ever more responses are cached during
       the test, and the latency for a cache hit is much smaller than for a
       cache miss.  The latency graph is provided as an aid in determining
       the point where the server gets overloaded, which can be seen as a
       sharp upwards turn in the graph.  The latency graph is not intended
       for making absolute latency measurements or comparisons between
       servers; the latencies shown in the graph are not representative of
       production latencies due to the initially empty cache and the
       deliberate overloading of the server towards the end of the test.

       Note that all measurements are displayed on the plot at the
       horizontal position corresponding to the point in time when the
       query was sent, not when the response (if any) was received.  This
       makes it easy to compare the query and response rates; for example,
       if no queries are dropped, the query and response graphs will be
       identical.  As another example, if the plot shows 10% failure
       responses at t=5 seconds, this means that 10% of the queries sent at
       t=5 seconds eventually failed, not that 10% of the responses
       received at t=5 seconds were failures.

   Determining the server's maximum throughput
       Often, the goal of running resperf is to determine the server's
       maximum throughput, in other words, the number of queries per second
       it is capable of handling.  This is not always an easy task, because
       as a server is driven into overload, the service it provides may
       deteriorate gradually, and this deterioration can manifest itself as
       queries being dropped, as an increase in the number of SERVFAIL
       responses, or as an increase in latency.  The maximum throughput may
       be defined as the highest level of traffic at which the server still
       provides an acceptable level of service, but that means you first
       need to decide what an acceptable level of service means in terms of
       packet drop percentage, SERVFAIL percentage, and latency.

       The summary statistics in the "Resperf output" section of the report
       contain a "Maximum throughput" value which by default is determined
       from the maximum rate at which the server was able to return
       responses, without regard to the number of queries being dropped or
       failing at that point.  This method of throughput measurement has
       the advantage of simplicity, but it may or may not be appropriate
       for your needs; the reported value should always be validated by a
       visual inspection of the graphs to ensure that service has not
       already deteriorated unacceptably before the maximum response rate
       is reached.  It may also be helpful to look at the "Lost at that
       point" value in the summary statistics; this indicates the
       percentage of the queries that was being dropped at the point in the
       test when the maximum throughput was reached.

       Alternatively, you can make resperf report the throughput at the
       point in the test where the percentage of queries dropped exceeds a
       given limit (or the maximum as above if the limit is never
       exceeded).  This can be a more realistic indication of how much the
       server can be loaded while still providing an acceptable level of
       service.  This is done using the -L command line option; for
       example, specifying -L 10 makes resperf report the highest
       throughput reached before the server starts dropping more than 10%
       of the queries.

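       For example, to report the highest throughput reached before more
       than 10% of queries were being dropped:

              resperf-report -s 10.0.0.2 -d queryfile -L 10
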
       There is no corresponding way of automatically constraining results
       based on the number of failed queries, because unlike dropped
       queries, resolution failures will occur even when the server is not
       overloaded, and the number of such failures is heavily dependent on
       the query data and network conditions.  Therefore, the plots should
       be manually inspected to ensure that there is not an abnormal number
       of failures.

GENERATING CONSTANT TRAFFIC
       In addition to ramping up traffic linearly, resperf also has the
       capability to send a constant stream of traffic.  This can be useful
       when using resperf for tasks other than performance measurement; for
       example, it can be used to "soak test" a server by subjecting it to
       a sustained load for an extended period of time.

       To generate a constant traffic load, use the -c command line option,
       together with the -m option which specifies the desired constant
       query rate.  For example, to send 10000 queries per second for an
       hour, use -m 10000 -c 3600.  This will include the usual 60-second
       gradual ramp-up of traffic at the beginning, which may be useful to
       avoid initially overwhelming a server that is starting with an empty
       cache.  To start the onslaught of traffic instantly, use -m 10000
       -c 3600 -r 0.

       To be precise, resperf will do a linear ramp-up of traffic from 0 to
       -m queries per second over a period of -r seconds, followed by a
       plateau of steady traffic at -m queries per second lasting for -c
       seconds, followed by waiting for responses for an extra 40 seconds.
       Either the ramp-up or the plateau can be suppressed by supplying a
       duration of zero seconds with -r 0 and -c 0, respectively.  The
       latter is the default.

       Sending traffic at high rates for hours on end will of course
       require very large amounts of input data.  Also, a long-running test
       will generate a large amount of plot data, which is kept in memory
       for the duration of the test.  To reduce the memory usage and the
       size of the plot file, consider increasing the interval between
       measurements from the default of 0.5 seconds using the -i option in
       long-running tests.

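       Putting these pieces together, a one-hour soak test at 10000 queries
       per second might look like the following, where -R allows a small
       query file to be reused for the whole run and -i 5 keeps the plot
       data small (the server address and file name are placeholders):

              resperf -s 10.0.0.2 -d queryfile -R -m 10000 -c 3600 -i 5
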
       When using resperf for long-running tests, it is important that the
       traffic rate specified using the -m option is one that both resperf
       itself and the server under test can sustain.  Otherwise, the test
       is likely to be cut short as a result of either running out of query
       IDs (because of large numbers of dropped queries) or of resperf
       falling behind its transmission schedule.

OPTIONS
       Because the resperf-report script passes its command line options
       directly to the resperf program, they both accept the same set of
       options, with one exception: resperf-report automatically adds an
       appropriate -P to the resperf command line, and therefore does not
       itself take a -P option.

       -d datafile
              Specifies the input data file.  If not specified, resperf
              will read from standard input.

       -R
              Reopen the datafile if it runs out of data before the testing
              is completed.  This allows for long-running tests on a very
              small and simple query datafile.

       -M mode
              Specifies the transport mode to use: "udp", "tcp" or "dot".
              The default is "udp".

       -s server_addr
              Specifies the name or address of the server to which requests
              will be sent.  The default is the loopback address,
              127.0.0.1.

       -p port
              Sets the port on which the DNS packets are sent.  If not
              specified, the standard DNS port (UDP/TCP 53, DoT 853) is
              used.

       -a local_addr
              Specifies the local address from which to send requests.  The
              default is the wildcard address.

       -x local_port
              Specifies the local port from which to send requests.  The
              default is the wildcard port (0).

              If acting as multiple clients and the wildcard port is used,
              each client will use a different random port.  If a port is
              specified, the clients will use a range of ports starting
              with the specified one.

       -t timeout
              Specifies the request timeout value, in seconds.  resperf
              will no longer wait for a response to a particular request
              after this many seconds have elapsed.  The default is 45
              seconds.

              resperf times out unanswered requests in order to reclaim
              query IDs so that the query ID space will not be exhausted in
              a long-running test, such as when "soak testing" a server for
              a day with -m 10000 -c 86400.  The timeouts and the ability
              to tune them are of little use in the more typical use case
              of a performance test lasting only a minute or two.

              The default timeout of 45 seconds was chosen to be longer
              than the query timeout of current caching servers.  Note that
              this is longer than the corresponding default in dnsperf,
              because caching servers can take many orders of magnitude
              longer to answer a query than authoritative servers do.

              If a short timeout is used, there is a possibility that
              resperf will receive a response after the corresponding
              request has timed out; in this case, a message like "Warning:
              Received a response with an unexpected id: 141" will be
              printed.

       -b bufsize
              Sets the size of the socket's send and receive buffers, in
              kilobytes.  If not specified, the operating system's default
              is used.

       -f family
              Specifies the address family used for sending DNS packets.
              The possible values are "inet", "inet6", or "any".  If "any"
              (the default value) is specified, resperf will use whichever
              address family is appropriate for the server it is sending
              packets to.

       -e
              Enables EDNS0 [RFC2671], by adding an OPT record to all
              packets sent.

       -D
              Sets the DO (DNSSEC OK) bit [RFC3225] in all packets sent.
              This also enables EDNS0, which is required for DNSSEC.

       -y [alg:]name:secret
              Add a TSIG record [RFC2845] to all packets sent, using the
              specified TSIG key algorithm, name and secret, where the
              algorithm defaults to hmac-md5 and the secret is expressed as
              a base-64 encoded string.

       -h
              Print a usage statement and exit.

       -i interval
              Specifies the time interval between data points in the plot
              file.  The default is 0.5 seconds.

       -m max_qps
              Specifies the target maximum query rate (in queries per
              second).  This should be higher than the expected maximum
              throughput of the server being tested.  Traffic will be
              ramped up at a linearly increasing rate until this value is
              reached, or until one of the other conditions described in
              the section "Running the test" occurs.  The default is 100000
              queries per second.

       -P plot_data_file
              Specifies the name of the plot data file.  The default is
              resperf.gnuplot.

       -r rampup_time
              Specifies the length of time over which traffic will be
              ramped up.  The default is 60 seconds.

       -c constant_traffic_time
              Specifies the length of time for which traffic will be sent
              at a constant rate following the initial ramp-up.  The
              default is 0 seconds, meaning no sending of traffic at a
              constant rate will be done.

       -L max_loss
              Specifies the maximum acceptable query loss percentage for
              purposes of determining the maximum throughput value.  The
              default is 100%, meaning that resperf will measure the
              maximum throughput without regard to query loss.

       -C clients
              Act as multiple clients.  Requests are sent from multiple
              sockets.  The default is to act as 1 client.

       -q max_outstanding
              Sets the maximum number of outstanding requests.  resperf
              will stop ramping up traffic when this many queries are
              outstanding.  The default is 64k, and the limit is 64k per
              client.

       -F fall_behind
              Sets the maximum number of queries that can fall behind being
              sent.  resperf will stop if it falls this many queries behind
              its transmission schedule, which can be relatively easy to
              hit if max_qps is set too high.  The default is 1000, and
              setting it to zero (0) disables the check.

       -v
              Enables verbose mode to report about network readiness and
              congestion.

       -W
              Log warnings and errors to standard output instead of
              standard error, making it easier for scripts, tests, and
              automation to capture all output.

THE PLOT DATA FILE
       The plot data file is written by the resperf program and contains
       the data to be plotted using gnuplot.  When running resperf via the
       resperf-report script, there is no need for the user to deal with
       this file directly, but its format and contents are documented here
       for completeness and in case you wish to run resperf directly and
       use its output for purposes other than viewing it with gnuplot.

       The first line of the file is a comment identifying the fields.  It
       may be recognized as a comment by its leading hash sign (#).

       Subsequent lines contain the actual plot data.  For purposes of
       generating the plot data file, the test run is divided into time
       intervals of 0.5 seconds (or some other length of time specified
       with the -i command line option).  Each line corresponds to one such
       interval, and contains the following values as floating-point
       numbers (an illustrative example follows the list):

       Time
              The midpoint of this time interval, in seconds since the
              beginning of the run

       Target queries per second
              The number of queries per second scheduled to be sent in this
              time interval

       Actual queries per second
              The number of queries per second actually sent in this time
              interval

       Responses per second
              The number of responses received corresponding to queries
              sent in this time interval, divided by the length of the
              interval

       Failures per second
              The number of responses received corresponding to queries
              sent in this time interval and having an RCODE other than
              NOERROR or NXDOMAIN, divided by the length of the interval

       Average latency
              The average time between sending the query and receiving a
              response, for queries sent in this time interval

       Connections
              The number of connections made, including reconnections,
              during this time interval.  This is only relevant to
              connection-oriented protocols, such as TCP and DoT.

       Average connection latency
              The average time between starting to connect and having the
              connection ready for sending queries, for this time interval.
              This is only relevant to connection-oriented protocols, such
              as TCP and DoT.
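
       As an illustration, the first few lines of a plot data file might
       look like the following, with one column per field in the order
       listed above (the header wording and all values here are made up
       for the example):

              # time target_qps actual_qps responses_per_sec ...
              0.250 416.7 416.0 415.2 4.1 0.0312 0.0 0.0
              0.750 1250.0 1250.0 1248.3 12.6 0.0289 0.0 0.0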
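
       If you run resperf directly and want to plot the data yourself, a
       minimal gnuplot script along the following lines should work,
       assuming the default plot data file name and the column order listed
       above:

              set terminal png
              set output "rates.png"
              set xlabel "Time (seconds)"
              set ylabel "Rate (per second)"
              plot "resperf.gnuplot" using 1:3 with lines title "Queries", \
                   "resperf.gnuplot" using 1:4 with lines title "Responses"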

SEE ALSO
       dnsperf(1)

AUTHOR
       Nominum, Inc.

       Maintained by DNS-OARC

              https://www.dns-oarc.net/

BUGS
       For issues and feature requests please use:

              https://github.com/DNS-OARC/dnsperf/issues

       For questions and help please use:

              admin@dns-oarc.net

resperf                              2.6.0                          resperf(1)