PGBENCH(1)               PostgreSQL 10.7 Documentation              PGBENCH(1)



NAME

       pgbench - run a benchmark test on PostgreSQL

SYNOPSIS

       pgbench -i [option...] [dbname]

       pgbench [option...] [dbname]

DESCRIPTION

       pgbench is a simple program for running benchmark tests on PostgreSQL.
       It runs the same sequence of SQL commands over and over, possibly in
       multiple concurrent database sessions, and then calculates the average
       transaction rate (transactions per second). By default, pgbench tests a
       scenario that is loosely based on TPC-B, involving five SELECT, UPDATE,
       and INSERT commands per transaction. However, it is easy to test other
       cases by writing your own transaction script files.

       Typical output from pgbench looks like:

           transaction type: <builtin: TPC-B (sort of)>
           scaling factor: 10
           query mode: simple
           number of clients: 10
           number of threads: 1
           number of transactions per client: 1000
           number of transactions actually processed: 10000/10000
           tps = 85.184871 (including connections establishing)
           tps = 85.296346 (excluding connections establishing)

       The first six lines report some of the most important parameter
       settings. The next line reports the number of transactions completed
       and intended (the latter being just the product of number of clients
       and number of transactions per client); these will be equal unless the
       run failed before completion. (In -T mode, only the actual number of
       transactions is printed.) The last two lines report the number of
       transactions per second, figured with and without counting the time to
       start database sessions.

       The default TPC-B-like transaction test requires specific tables to be
       set up beforehand.  pgbench should be invoked with the -i (initialize)
       option to create and populate these tables. (When you are testing a
       custom script, you don't need this step, but will instead need to do
       whatever setup your test needs.) Initialization looks like:

           pgbench -i [ other-options ] dbname

       where dbname is the name of the already-created database to test in.
       (You may also need -h, -p, and/or -U options to specify how to connect
       to the database server.)

           Caution
           pgbench -i creates four tables: pgbench_accounts, pgbench_branches,
           pgbench_history, and pgbench_tellers, destroying any existing
           tables of these names. Be very careful to use another database if
           you have tables having these names!

       At the default “scale factor” of 1, the tables initially contain this
       many rows:

           table                   # of rows
           ---------------------------------
           pgbench_branches        1
           pgbench_tellers         10
           pgbench_accounts        100000
           pgbench_history         0

       You can (and, for most purposes, probably should) increase the number
       of rows by using the -s (scale factor) option. The -F (fillfactor)
       option might also be used at this point.

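       The scaling is simple multiplication of the row counts above. As an
       illustrative sketch (the function name is hypothetical, not part of
       pgbench), the initial row counts for a given -s value are:

```python
# Sketch: expected initial row counts for pgbench -i -s <scale>,
# derived from the per-table counts at scale factor 1 shown above.
ROWS_AT_SCALE_1 = {
    "pgbench_branches": 1,
    "pgbench_tellers": 10,
    "pgbench_accounts": 100_000,
    "pgbench_history": 0,       # history always starts empty
}

def initial_rows(scale: int) -> dict:
    """Multiply each table's base row count by the scale factor."""
    return {table: n * scale for table, n in ROWS_AT_SCALE_1.items()}

print(initial_rows(100)["pgbench_accounts"])  # 10000000
```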
       Once you have done the necessary setup, you can run your benchmark with
       a command that doesn't include -i, that is

           pgbench [ options ] dbname

       In nearly all cases, you'll need some options to make a useful test.
       The most important options are -c (number of clients), -t (number of
       transactions), -T (time limit), and -f (specify a custom script file).
       See below for a full list.

OPTIONS

       The following is divided into three subsections. Different options are
       used during database initialization and while running benchmarks, but
       some options are useful in both cases.

   Initialization Options
       pgbench accepts the following command-line initialization arguments:

       -i
       --initialize
           Required to invoke initialization mode.

       -F fillfactor
       --fillfactor=fillfactor
           Create the pgbench_accounts, pgbench_tellers and pgbench_branches
           tables with the given fillfactor. Default is 100.

       -n
       --no-vacuum
           Perform no vacuuming after initialization.

       -q
       --quiet
           Switch logging to quiet mode, producing only one progress message
           per 5 seconds. The default logging prints one message each 100000
           rows, which often outputs many lines per second (especially on good
           hardware).

       -s scale_factor
       --scale=scale_factor
           Multiply the number of rows generated by the scale factor. For
           example, -s 100 will create 10,000,000 rows in the pgbench_accounts
           table. Default is 1. When the scale is 20,000 or larger, the
           columns used to hold account identifiers (aid columns) will switch
           to using larger integers (bigint), in order to be big enough to
           hold the range of account identifiers.

       --foreign-keys
           Create foreign key constraints between the standard tables.

       --index-tablespace=index_tablespace
           Create indexes in the specified tablespace, rather than the default
           tablespace.

       --tablespace=tablespace
           Create tables in the specified tablespace, rather than the default
           tablespace.

       --unlogged-tables
           Create all tables as unlogged tables, rather than permanent tables.

   Benchmarking Options
       pgbench accepts the following command-line benchmarking arguments:

       -b scriptname[@weight]
       --builtin=scriptname[@weight]
           Add the specified built-in script to the list of executed scripts.
           An optional integer weight after @ allows adjusting the probability
           of drawing the script. If not specified, it is set to 1. Available
           built-in scripts are: tpcb-like, simple-update and select-only.
           Unambiguous prefixes of built-in names are accepted. With the
           special name list, show the list of built-in scripts and exit
           immediately.

       -c clients
       --client=clients
           Number of clients simulated, that is, number of concurrent database
           sessions. Default is 1.

       -C
       --connect
           Establish a new connection for each transaction, rather than doing
           it just once per client session. This is useful to measure the
           connection overhead.

       -d
       --debug
           Print debugging output.

       -D varname=value
       --define=varname=value
           Define a variable for use by a custom script (see below). Multiple
           -D options are allowed.

       -f filename[@weight]
       --file=filename[@weight]
           Add a transaction script read from filename to the list of executed
           scripts. An optional integer weight after @ allows adjusting the
           probability of drawing the script. See below for details.

       -j threads
       --jobs=threads
           Number of worker threads within pgbench. Using more than one thread
           can be helpful on multi-CPU machines. Clients are distributed as
           evenly as possible among available threads. Default is 1.

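       The phrase “as evenly as possible” can be sketched as follows (a
       plausible illustration only, not pgbench's actual source code):

```python
def split_clients(clients: int, threads: int) -> list[int]:
    """Sketch of an even client-to-thread distribution: every thread
    gets the base share, and the first (clients % threads) threads
    each take one extra client."""
    base, extra = divmod(clients, threads)
    return [base + (1 if i < extra else 0) for i in range(threads)]

print(split_clients(10, 4))  # [3, 3, 2, 2]
```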
       -l
       --log
           Write information about each transaction to a log file. See below
           for details.

       -L limit
       --latency-limit=limit
           Transactions that last more than limit milliseconds are counted and
           reported separately, as late.

           When throttling is used (--rate=...), transactions that lag behind
           schedule by more than limit ms, and thus have no hope of meeting
           the latency limit, are not sent to the server at all. They are
           counted and reported separately as skipped.

       -M querymode
       --protocol=querymode
           Protocol to use for submitting queries to the server:

           ·   simple: use simple query protocol.

           ·   extended: use extended query protocol.

           ·   prepared: use extended query protocol with prepared statements.

           The default is simple query protocol. (See Chapter 52 for more
           information.)

       -n
       --no-vacuum
           Perform no vacuuming before running the test. This option is
           necessary if you are running a custom test scenario that does not
           include the standard tables pgbench_accounts, pgbench_branches,
           pgbench_history, and pgbench_tellers.

       -N
       --skip-some-updates
           Run built-in simple-update script. Shorthand for -b simple-update.

       -P sec
       --progress=sec
           Show progress report every sec seconds. The report includes the
           time since the beginning of the run, the tps since the last report,
           and the transaction latency average and standard deviation since
           the last report. Under throttling (-R), the latency is computed
           with respect to the transaction scheduled start time, not the
           actual transaction beginning time, thus it also includes the
           average schedule lag time.

       -r
       --report-latencies
           Report the average per-statement latency (execution time from the
           perspective of the client) of each command after the benchmark
           finishes. See below for details.

       -R rate
       --rate=rate
           Execute transactions targeting the specified rate instead of
           running as fast as possible (the default). The rate is given in
           transactions per second. If the targeted rate is above the maximum
           possible rate, the rate limit won't impact the results.

           The rate is targeted by starting transactions along a
           Poisson-distributed schedule time line. The expected start time
           schedule moves forward based on when the client first started, not
           when the previous transaction ended. That approach means that when
           transactions go past their original scheduled end time, it is
           possible for later ones to catch up again.

           When throttling is active, the transaction latency reported at the
           end of the run is calculated from the scheduled start times, so it
           includes the time each transaction had to wait for the previous
           transaction to finish. The wait time is called the schedule lag
           time, and its average and maximum are also reported separately. The
           transaction latency with respect to the actual transaction start
           time, i.e. the time spent executing the transaction in the
           database, can be computed by subtracting the schedule lag time from
           the reported latency.

           If --latency-limit is used together with --rate, a transaction can
           lag behind so much that it is already over the latency limit when
           the previous transaction ends, because the latency is calculated
           from the scheduled start time. Such transactions are not sent to
           the server, but are skipped altogether and counted separately.

           A high schedule lag time is an indication that the system cannot
           process transactions at the specified rate, with the chosen number
           of clients and threads. When the average transaction execution time
           is longer than the scheduled interval between each transaction,
           each successive transaction will fall further behind, and the
           schedule lag time will keep increasing the longer the test run
           lasts. When that happens, you will have to reduce the specified
           transaction rate.

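           The Poisson-distributed schedule described above can be sketched
           in a few lines (illustrative only; schedule_times is a
           hypothetical name, not a pgbench API):

```python
import random

def schedule_times(rate_tps: float, n: int, seed: int = 42) -> list[float]:
    """Sketch of a Poisson-process schedule: exponential inter-arrival
    times with mean 1/rate, accumulated from the moment the client
    first started (t = 0).  Because the schedule is anchored to the
    start time rather than to the previous transaction's end, a
    stalled transaction does not push later scheduled starts back."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate_tps)  # mean interval = 1/rate seconds
        times.append(t)
    return times

times = schedule_times(100.0, 1000)
# Achieved rate should be close to the 100 tps target:
print(0.7 < (len(times) / times[-1]) / 100.0 < 1.3)  # True
```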
       -s scale_factor
       --scale=scale_factor
           Report the specified scale factor in pgbench's output. With the
           built-in tests, this is not necessary; the correct scale factor
           will be detected by counting the number of rows in the
           pgbench_branches table. However, when testing only custom
           benchmarks (-f option), the scale factor will be reported as 1
           unless this option is used.

       -S
       --select-only
           Run built-in select-only script. Shorthand for -b select-only.

       -t transactions
       --transactions=transactions
           Number of transactions each client runs. Default is 10.

       -T seconds
       --time=seconds
           Run the test for this many seconds, rather than a fixed number of
           transactions per client.  -t and -T are mutually exclusive.

       -v
       --vacuum-all
           Vacuum all four standard tables before running the test. With
           neither -n nor -v, pgbench will vacuum the pgbench_tellers and
           pgbench_branches tables, and will truncate pgbench_history.

       --aggregate-interval=seconds
           Length of aggregation interval (in seconds). May be used only with
           the -l option. With this option, the log contains per-interval
           summary data, as described below.

       --log-prefix=prefix
           Set the filename prefix for the log files created by --log. The
           default is pgbench_log.

       --progress-timestamp
           When showing progress (option -P), use a timestamp (Unix epoch)
           instead of the number of seconds since the beginning of the run.
           The unit is in seconds, with millisecond precision after the dot.
           This helps compare logs generated by various tools.

       --sampling-rate=rate
           Sampling rate, used when writing data into the log, to reduce the
           amount of log generated. If this option is given, only the
           specified fraction of transactions are logged. 1.0 means all
           transactions will be logged, 0.05 means only 5% of the transactions
           will be logged.

           Remember to take the sampling rate into account when processing the
           log file. For example, when computing tps values, you need to
           multiply the numbers accordingly (e.g. with 0.01 sample rate,
           you'll only get 1/100 of the actual tps).

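           As a sketch of that correction (estimate_tps is a hypothetical
           helper, not part of pgbench):

```python
def estimate_tps(logged_transactions: int, duration_s: float,
                 sampling_rate: float) -> float:
    """With --sampling-rate=r, only a fraction r of transactions
    reaches the log, so scale the logged count back up before
    dividing by the run duration."""
    return logged_transactions / sampling_rate / duration_s

# 850 logged lines over a 100 s run at a 0.01 sampling rate imply
# 85,000 actual transactions, i.e. 850 tps (not 8.5 tps):
print(estimate_tps(850, 100.0, 0.01))  # 850.0
```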
   Common Options
       pgbench accepts the following command-line common arguments:

       -h hostname
       --host=hostname
           The database server's host name

       -p port
       --port=port
           The database server's port number

       -U login
       --username=login
           The user name to connect as

       -V
       --version
           Print the pgbench version and exit.

       -?
       --help
           Show help about pgbench command line arguments, and exit.

NOTES

   What is the “Transaction” Actually Performed in pgbench?
       pgbench executes test scripts chosen randomly from a specified list.
       They include built-in scripts with -b and user-provided custom scripts
       with -f. Each script may be given a relative weight specified after a @
       so as to change its drawing probability. The default weight is 1.
       Scripts with a weight of 0 are ignored.

       The default built-in transaction script (also invoked with -b
       tpcb-like) issues seven commands per transaction over randomly chosen
       aid, tid, bid and balance. The scenario is inspired by the TPC-B
       benchmark, but is not actually TPC-B, hence the name.

        1. BEGIN;

        2. UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid
           = :aid;

        3. SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

        4. UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid =
           :tid;

        5. UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid
           = :bid;

        6. INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES
           (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);

        7. END;

       If you select the simple-update built-in (also -N), steps 4 and 5
       aren't included in the transaction. This will avoid update contention
       on these tables, but it makes the test case even less like TPC-B.

       If you select the select-only built-in (also -S), only the SELECT is
       issued.

   Custom Scripts
       pgbench has support for running custom benchmark scenarios by replacing
       the default transaction script (described above) with a transaction
       script read from a file (-f option). In this case a “transaction”
       counts as one execution of a script file.

       A script file contains one or more SQL commands terminated by
       semicolons. Empty lines and lines beginning with -- are ignored. Script
       files can also contain “meta commands”, which are interpreted by
       pgbench itself, as described below.

           Note
           Before PostgreSQL 9.6, SQL commands in script files were terminated
           by newlines, and so they could not be continued across lines. Now a
           semicolon is required to separate consecutive SQL commands (though
           a SQL command does not need one if it is followed by a meta
           command). If you need to create a script file that works with both
           old and new versions of pgbench, be sure to write each SQL command
           on a single line ending with a semicolon.

       There is a simple variable-substitution facility for script files.
       Variables can be set by the command-line -D option, explained above, or
       by the meta commands explained below. In addition to any variables
       preset by -D command-line options, there are a few variables that are
       preset automatically, listed in Table 241. A value specified for these
       variables using -D takes precedence over the automatic presets. Once
       set, a variable's value can be inserted into a SQL command by writing
       :variablename. When running more than one client session, each session
       has its own set of variables.

       Table 241. Automatic Variables
       ┌──────────┬────────────────────────────┐
       │Variable  │ Description                │
       ├──────────┼────────────────────────────┤
       │scale     │ current scale factor       │
       ├──────────┼────────────────────────────┤
       │client_id │ unique number identifying  │
       │          │ the client session (starts │
       │          │ from zero)                 │
       └──────────┴────────────────────────────┘

       Script file meta commands begin with a backslash (\) and normally
       extend to the end of the line, although they can be continued to
       additional lines by writing backslash-return. Arguments to a meta
       command are separated by white space. These meta commands are
       supported:

       \set varname expression
           Sets variable varname to a value calculated from expression. The
           expression may contain integer constants such as 5432, double
           constants such as 3.14159, references to variables :variablename,
           unary operators (+, -) and binary operators (+, -, *, /, %) with
           their usual precedence and associativity, function calls, and
           parentheses.

           Examples:

               \set ntellers 10 * :scale
               \set aid (1021 * random(1, 100000 * :scale)) % \
                          (100000 * :scale) + 1

       \sleep number [ us | ms | s ]
           Causes script execution to sleep for the specified duration in
           microseconds (us), milliseconds (ms) or seconds (s). If the unit is
           omitted then seconds are the default.  number can be either an
           integer constant or a :variablename reference to a variable having
           an integer value.

           Example:

               \sleep 10 ms

       \setshell varname command [ argument ... ]
           Sets variable varname to the result of the shell command command
           with the given argument(s). The command must return an integer
           value through its standard output.

           command and each argument can be either a text constant or a
           :variablename reference to a variable. If you want to use an
           argument starting with a colon, write an additional colon at the
           beginning of argument.

           Example:

               \setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon

       \shell command [ argument ... ]
           Same as \setshell, but the result of the command is discarded.

           Example:

               \shell command literal_argument :variable ::literal_starting_with_colon

   Built-In Functions
       The functions listed in Table 242 are built into pgbench and may be
       used in expressions appearing in \set.

       Table 242. pgbench Functions
       ┌───────────────────────┬───────────────┬───────────────────────────┬───────────────────────┬────────────────────────┐
       │Function               │ Return Type   │ Description               │ Example               │ Result                 │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │abs(a)                 │ same as a     │ absolute value            │ abs(-17)              │ 17                     │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │debug(a)               │ same as a     │ print a to stderr, and    │ debug(5432.1)         │ 5432.1                 │
       │                       │               │ return a                  │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │double(i)              │ double        │ cast to double            │ double(5432)          │ 5432.0                 │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │greatest(a [, ... ] )  │ double if any │ largest value among       │ greatest(5, 4, 3, 2)  │ 5                      │
       │                       │ a is double,  │ arguments                 │                       │                        │
       │                       │ else integer  │                           │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │int(x)                 │ integer       │ cast to int               │ int(5.4 + 3.8)        │ 9                      │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │least(a [, ... ] )     │ double if any │ smallest value among      │ least(5, 4, 3, 2.1)   │ 2.1                    │
       │                       │ a is double,  │ arguments                 │                       │                        │
       │                       │ else integer  │                           │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │pi()                   │ double        │ value of the constant PI  │ pi()                  │ 3.14159265358979323846 │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │random(lb, ub)         │ integer       │ uniformly-distributed     │ random(1, 10)         │ an integer between 1   │
       │                       │               │ random integer in         │                       │ and 10                 │
       │                       │               │ [lb, ub]                  │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │random_exponential(lb, │ integer       │ exponentially-distributed │ random_exponential(1, │ an integer between 1   │
       │ub, parameter)         │               │ random integer in         │ 10, 3.0)              │ and 10                 │
       │                       │               │ [lb, ub], see below       │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │random_gaussian(lb,    │ integer       │ Gaussian-distributed      │ random_gaussian(1,    │ an integer between 1   │
       │ub, parameter)         │               │ random integer in         │ 10, 2.5)              │ and 10                 │
       │                       │               │ [lb, ub], see below       │                       │                        │
       ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
       │sqrt(x)                │ double        │ square root               │ sqrt(2.0)             │ 1.414213562            │
       └───────────────────────┴───────────────┴───────────────────────────┴───────────────────────┴────────────────────────┘

       The random function generates values using a uniform distribution, that
       is all the values are drawn within the specified range with equal
       probability. The random_exponential and random_gaussian functions
       require an additional double parameter which determines the precise
       shape of the distribution.

       ·   For an exponential distribution, parameter controls the
           distribution by truncating a quickly-decreasing exponential
           distribution at parameter, and then projecting onto integers
           between the bounds. To be precise, with

               f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))

           then value i between min and max inclusive is drawn with
           probability: f(i) - f(i + 1).

           Intuitively, the larger the parameter, the more frequently values
           close to min are accessed, and the less frequently values close to
           max are accessed. The closer to 0 parameter is, the flatter (more
           uniform) the access distribution. A crude approximation of the
           distribution is that the most frequent 1% values in the range,
           close to min, are drawn parameter% of the time. The parameter value
           must be strictly positive.

       ·   For a Gaussian distribution, the interval is mapped onto a standard
           normal distribution (the classical bell-shaped Gaussian curve)
           truncated at -parameter on the left and +parameter on the right.
           Values in the middle of the interval are more likely to be drawn.
           To be precise, if PHI(x) is the cumulative distribution function of
           the standard normal distribution, with mean mu defined as (max +
           min) / 2.0, with

               f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
                      (2.0 * PHI(parameter) - 1)

           then value i between min and max inclusive is drawn with
           probability: f(i + 0.5) - f(i - 0.5). Intuitively, the larger the
           parameter, the more frequently values close to the middle of the
           interval are drawn, and the less frequently values close to the min
           and max bounds. About 67% of values are drawn from the middle 1.0 /
           parameter, that is a relative 0.5 / parameter around the mean, and
           95% in the middle 2.0 / parameter, that is a relative 1.0 /
           parameter around the mean; for instance, if parameter is 4.0, 67%
           of values are drawn from the middle quarter (1.0 / 4.0) of the
           interval (i.e. from 3.0 / 8.0 to 5.0 / 8.0) and 95% from the middle
           half (2.0 / 4.0) of the interval (second and third quartiles). The
           minimum parameter is 2.0 for performance of the Box-Muller
           transform.

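       The two formulas above can be checked numerically. The sketch below
       (function names hypothetical, not part of pgbench) implements f(x)
       for each distribution, using the standard normal CDF for PHI:

```python
from math import exp
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal cumulative distribution function

def exp_prob(i: int, lo: int, hi: int, parameter: float) -> float:
    """P(random_exponential(lo, hi, parameter) == i) = f(i) - f(i + 1)."""
    def f(x: float) -> float:
        return exp(-parameter * (x - lo) / (hi - lo + 1)) / (1 - exp(-parameter))
    return f(i) - f(i + 1)

def gauss_prob(i: int, lo: int, hi: int, parameter: float) -> float:
    """P(random_gaussian(lo, hi, parameter) == i) = f(i + 0.5) - f(i - 0.5)."""
    mu = (hi + lo) / 2.0
    def f(x: float) -> float:
        return PHI(2.0 * parameter * (x - mu) / (hi - lo + 1)) / (2.0 * PHI(parameter) - 1)
    return f(i + 0.5) - f(i - 0.5)

e = [exp_prob(i, 1, 10, 3.0) for i in range(1, 11)]
g = [gauss_prob(i, 1, 10, 4.0) for i in range(1, 11)]
print(round(sum(e), 9), round(sum(g), 9))  # 1.0 1.0 — both sum to one
print(e[0] > e[-1], g[4] > g[0])           # True True
```

       As expected, the exponential probabilities decrease from min toward
       max, while the Gaussian probabilities peak around the middle of the
       interval.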
584       As an example, the full definition of the built-in TPC-B-like
585       transaction is:
586
587           \set aid random(1, 100000 * :scale)
588           \set bid random(1, 1 * :scale)
589           \set tid random(1, 10 * :scale)
590           \set delta random(-5000, 5000)
591           BEGIN;
592           UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
593           SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
594           UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
595           UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
596           INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
597           END;
598
599       This script allows each iteration of the transaction to reference
600       different, randomly-chosen rows. (This example also shows why it's
601       important for each client session to have its own variables — otherwise
602       they'd not be independently touching different rows.)

   Per-Transaction Logging
       With the -l option (but without the --aggregate-interval option),
       pgbench writes information about each transaction to a log file. The
       log file will be named prefix.nnn, where prefix defaults to
       pgbench_log, and nnn is the PID of the pgbench process. The prefix can
       be changed by using the --log-prefix option. If the -j option is 2 or
       higher, so that there are multiple worker threads, each will have its
       own log file. The first worker will use the same name for its log file
       as in the standard single worker case. The additional log files for the
       other workers will be named prefix.nnn.mmm, where mmm is a sequential
       number for each worker starting with 1.

       The format of the log is:

           client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]

       where client_id indicates which client session ran the transaction,
       transaction_no counts how many transactions have been run by that
       session, time is the total elapsed transaction time in microseconds,
       script_no identifies which script file was used (useful when multiple
       scripts were specified with -f or -b), and time_epoch/time_us are a
       Unix-epoch time stamp and an offset in microseconds (suitable for
       creating an ISO 8601 time stamp with fractional seconds) showing when
       the transaction completed. The schedule_lag field is the difference
       between the transaction's scheduled start time, and the time it
       actually started, in microseconds. It is only present when the --rate
       option is used. When both --rate and --latency-limit are used, the time
       for a skipped transaction will be reported as skipped.

       Here is a snippet of a log file generated in a single-client run:

           0 199 2241 0 1175850568 995598
           0 200 2465 0 1175850568 998079
           0 201 2513 0 1175850569 608
           0 202 2038 0 1175850569 2663

       Another example with --rate=100 and --latency-limit=5 (note the
       additional schedule_lag column):

           0 81 4621 0 1412881037 912698 3005
           0 82 6173 0 1412881037 914578 4304
           0 83 skipped 0 1412881037 914578 5217
           0 83 skipped 0 1412881037 914578 5099
           0 83 4722 0 1412881037 916203 3108
           0 84 4142 0 1412881037 918023 2333
           0 85 2465 0 1412881037 919759 740

       In this example, transaction 82 was late, because its latency (6.173
       ms) was over the 5 ms limit. The next two transactions were skipped,
       because they were already late before they were even started.
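
       A log line in the format described above can be parsed for analysis.
       This is an illustrative sketch; the parse_log_line helper is
       hypothetical, not part of pgbench:

```python
def parse_log_line(line):
    # client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]
    # The time field is the word "skipped" for skipped transactions.
    fields = line.split()
    skipped = fields[2] == "skipped"
    return {
        "client_id": int(fields[0]),
        "transaction_no": int(fields[1]),
        "time": None if skipped else int(fields[2]),  # microseconds
        "skipped": skipped,
        "script_no": int(fields[3]),
        "time_epoch": int(fields[4]),
        "time_us": int(fields[5]),
        # schedule_lag is only present when --rate is used
        "schedule_lag": int(fields[6]) if len(fields) > 6 else None,
    }

rec = parse_log_line("0 82 6173 0 1412881037 914578 4304")
```

       The optional seventh field is detected by field count, so the same
       helper handles runs with and without --rate.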

       When running a long test on hardware that can handle a lot of
       transactions, the log files can become very large. The --sampling-rate
       option can be used to log only a random sample of transactions.

   Aggregated Logging
       With the --aggregate-interval option, a different format is used for
       the log files:

           interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ]

       where interval_start is the start of the interval (as a Unix epoch time
       stamp), num_transactions is the number of transactions within the
       interval, sum_latency is the sum of the transaction latencies within
       the interval, sum_latency_2 is the sum of squares of the transaction
       latencies within the interval, min_latency is the minimum latency
       within the interval, and max_latency is the maximum latency within the
       interval. The next fields, sum_lag, sum_lag_2, min_lag, and max_lag,
       are only present if the --rate option is used. They provide statistics
       about the time each transaction had to wait for the previous one to
       finish, i.e. the difference between each transaction's scheduled start
       time and the time it actually started. The very last field, skipped, is
       only present if the --latency-limit option is used, too. It counts the
       number of transactions skipped because they would have started too
       late. Each transaction is counted in the interval when it was
       committed.

       Here is some example output:

           1345828501 5601 1542744 483552416 61 2573
           1345828503 7884 1979812 565806736 60 1479
           1345828505 7208 1979422 567277552 59 1391
           1345828507 7685 1980268 569784714 60 1398
           1345828509 7073 1979779 573489941 236 1411
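
       Because the log stores sums and sums of squares, each interval's
       average latency and standard deviation can be recovered without any
       per-transaction data. A sketch (the interval_stats helper is
       hypothetical, not part of pgbench):

```python
import math

def interval_stats(num_transactions, sum_latency, sum_latency_2):
    # Mean and (population) standard deviation of the latencies in one
    # aggregation interval, recovered from the logged running sums:
    #   mean     = sum_latency / n
    #   variance = sum_latency_2 / n - mean**2
    mean = sum_latency / num_transactions
    variance = sum_latency_2 / num_transactions - mean * mean
    return mean, math.sqrt(max(variance, 0.0))

# First interval of the example output above: mean latency is about
# 275 microseconds.
mean, stddev = interval_stats(5601, 1542744, 483552416)
```

       The max(..., 0.0) guard protects against tiny negative variances
       produced by floating-point rounding.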

       Notice that while the plain (unaggregated) log file shows which script
       was used for each transaction, the aggregated log does not. Therefore
       if you need per-script data, you need to aggregate the data on your
       own.

   Per-Statement Latencies
       With the -r option, pgbench collects the elapsed transaction time of
       each statement executed by every client. It then reports an average of
       those values, referred to as the latency for each statement, after the
       benchmark has finished.

       For the default script, the output will look similar to this:

           starting vacuum...end.
           transaction type: <builtin: TPC-B (sort of)>
           scaling factor: 1
           query mode: simple
           number of clients: 10
           number of threads: 1
           number of transactions per client: 1000
           number of transactions actually processed: 10000/10000
           latency average = 15.844 ms
           latency stddev = 2.715 ms
           tps = 618.764555 (including connections establishing)
           tps = 622.977698 (excluding connections establishing)
           script statistics:
            - statement latencies in milliseconds:
                   0.002  \set aid random(1, 100000 * :scale)
                   0.005  \set bid random(1, 1 * :scale)
                   0.002  \set tid random(1, 10 * :scale)
                   0.001  \set delta random(-5000, 5000)
                   0.326  BEGIN;
                   0.603  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
                   0.454  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
                   5.528  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
                   7.335  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
                   0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
                   1.212  END;

       If multiple script files are specified, the averages are reported
       separately for each script file.

       Note that collecting the additional timing information needed for
       per-statement latency computation adds some overhead. This will slow
       average execution speed and lower the computed TPS. The amount of
       slowdown varies significantly depending on platform and hardware.
       Comparing average TPS values with and without latency reporting enabled
       is a good way to measure if the timing overhead is significant.
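
       For downstream analysis, the per-statement figures can be scraped out
       of the report text. An illustrative sketch (statement_latencies is a
       hypothetical helper, not a pgbench feature):

```python
def statement_latencies(report):
    # Pull (latency_ms, statement) pairs out of the "statement latencies
    # in milliseconds" block of a pgbench -r report.
    out = []
    in_block = False
    for line in report.splitlines():
        if "statement latencies in milliseconds" in line:
            in_block = True
            continue
        if in_block:
            parts = line.strip().split(None, 1)
            if len(parts) != 2:
                break  # end of the latency block
            try:
                out.append((float(parts[0]), parts[1]))
            except ValueError:
                break  # first token is not a number: block is over
    return out
```

       The block is assumed to end at the first line that does not start
       with a numeric latency, which matches the report layout shown above.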

   Good Practices
       It is very easy to use pgbench to produce completely meaningless
       numbers. Here are some guidelines to help you get useful results.

       In the first place, never believe any test that runs for only a few
       seconds. Use the -t or -T option to make the run last at least a few
       minutes, so as to average out noise. In some cases you could need hours
       to get numbers that are reproducible. It's a good idea to try the test
       run a few times, to find out if your numbers are reproducible or not.

       For the default TPC-B-like test scenario, the initialization scale
       factor (-s) should be at least as large as the largest number of
       clients you intend to test (-c); else you'll mostly be measuring update
       contention. There are only -s rows in the pgbench_branches table, and
       every transaction wants to update one of them, so -c values in excess
       of -s will undoubtedly result in lots of transactions blocked waiting
       for other transactions.

       The default test scenario is also quite sensitive to how long it's been
       since the tables were initialized: accumulation of dead rows and dead
       space in the tables changes the results. To understand the results you
       must keep track of the total number of updates and when vacuuming
       happens. If autovacuum is enabled it can result in unpredictable
       changes in measured performance.

       A limitation of pgbench is that it can itself become the bottleneck
       when trying to test a large number of client sessions. This can be
       alleviated by running pgbench on a different machine from the database
       server, although low network latency will be essential. It might even
       be useful to run several pgbench instances concurrently, on several
       client machines, against the same database server.
   Security
       If untrusted users have access to a database that has not adopted a
       secure schema usage pattern, do not run pgbench in that database.
       pgbench uses unqualified names and does not manipulate the search path.



PostgreSQL 10.7                      2019                           PGBENCH(1)