PGBENCH(1)              PostgreSQL 11.3 Documentation              PGBENCH(1)
2
3
4
NAME
       pgbench - run a benchmark test on PostgreSQL
7
SYNOPSIS
       pgbench -i [option...] [dbname]

       pgbench [option...] [dbname]
12
DESCRIPTION
       pgbench is a simple program for running benchmark tests on
       PostgreSQL. It runs the same sequence of SQL commands over and over,
       possibly in multiple concurrent database sessions, and then
       calculates the average transaction rate (transactions per second).
       By default, pgbench tests a scenario that is loosely based on TPC-B,
       involving five SELECT, UPDATE, and INSERT commands per transaction.
       However, it is easy to test other cases by writing your own
       transaction script files.
21
22 Typical output from pgbench looks like:
23
24 transaction type: <builtin: TPC-B (sort of)>
25 scaling factor: 10
26 query mode: simple
27 number of clients: 10
28 number of threads: 1
29 number of transactions per client: 1000
30 number of transactions actually processed: 10000/10000
31 tps = 85.184871 (including connections establishing)
32 tps = 85.296346 (excluding connections establishing)
33
34 The first six lines report some of the most important parameter
35 settings. The next line reports the number of transactions completed
36 and intended (the latter being just the product of number of clients
37 and number of transactions per client); these will be equal unless the
38 run failed before completion. (In -T mode, only the actual number of
39 transactions is printed.) The last two lines report the number of
40 transactions per second, figured with and without counting the time to
41 start database sessions.
42
43 The default TPC-B-like transaction test requires specific tables to be
44 set up beforehand. pgbench should be invoked with the -i (initialize)
45 option to create and populate these tables. (When you are testing a
46 custom script, you don't need this step, but will instead need to do
47 whatever setup your test needs.) Initialization looks like:
48
49 pgbench -i [ other-options ] dbname
50
51 where dbname is the name of the already-created database to test in.
52 (You may also need -h, -p, and/or -U options to specify how to connect
53 to the database server.)
54
55 Caution
56 pgbench -i creates four tables pgbench_accounts, pgbench_branches,
57 pgbench_history, and pgbench_tellers, destroying any existing
58 tables of these names. Be very careful to use another database if
59 you have tables having these names!
60
61 At the default “scale factor” of 1, the tables initially contain this
62 many rows:
63
64 table # of rows
65 ---------------------------------
66 pgbench_branches 1
67 pgbench_tellers 10
68 pgbench_accounts 100000
69 pgbench_history 0
70
71 You can (and, for most purposes, probably should) increase the number
72 of rows by using the -s (scale factor) option. The -F (fillfactor)
73 option might also be used at this point.
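
       For example, to initialize a hypothetical database named bench at
       scale factor 100 with a reduced fillfactor, connecting to a remote
       server (host and user names are placeholders), one might run:

           pgbench -i -s 100 -F 90 -h dbhost -U dbuser bench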
74
75 Once you have done the necessary setup, you can run your benchmark with
76 a command that doesn't include -i, that is
77
78 pgbench [ options ] dbname
79
80 In nearly all cases, you'll need some options to make a useful test.
81 The most important options are -c (number of clients), -t (number of
82 transactions), -T (time limit), and -f (specify a custom script file).
83 See below for a full list.
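
       For instance, a basic run against the hypothetical bench database
       initialized above, using 10 clients in 2 worker threads for five
       minutes, could look like:

           pgbench -c 10 -j 2 -T 300 bench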
84
OPTIONS
       The following is divided into three subsections. Different options
       are used during database initialization and while running
       benchmarks, but some options are useful in both cases.
89
90 Initialization Options
91 pgbench accepts the following command-line initialization arguments:
92
93 -i
94 --initialize
95 Required to invoke initialization mode.
96
97 -I init_steps
98 --init-steps=init_steps
           Perform just a selected set of the normal initialization steps.
           init_steps specifies the initialization steps to be performed,
           using one character per step. Each step is invoked in the
           specified order. The default is dtgvp; an example invocation
           follows the list of steps. The available steps are:
103
104 d (Drop)
105 Drop any existing pgbench tables.
106
107 t (create Tables)
108 Create the tables used by the standard pgbench scenario, namely
109 pgbench_accounts, pgbench_branches, pgbench_history, and
110 pgbench_tellers.
111
112 g (Generate data)
113 Generate data and load it into the standard tables, replacing
114 any data already present.
115
116 v (Vacuum)
117 Invoke VACUUM on the standard tables.
118
119 p (create Primary keys)
120 Create primary key indexes on the standard tables.
121
122 f (create Foreign keys)
123 Create foreign key constraints between the standard tables.
124 (Note that this step is not performed by default.)
125
126
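           For example, to run the default steps plus foreign key creation
           against a hypothetical database named bench:

               pgbench -i -I dtgvpf bench

           This has the same effect as specifying --foreign-keys (described
           below).
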
127 -F fillfactor
128 --fillfactor=fillfactor
129 Create the pgbench_accounts, pgbench_tellers and pgbench_branches
130 tables with the given fillfactor. Default is 100.
131
132 -n
133 --no-vacuum
134 Perform no vacuuming during initialization. (This option suppresses
135 the v initialization step, even if it was specified in -I.)
136
137 -q
138 --quiet
           Switch logging to quiet mode, producing only one progress
           message per 5 seconds. The default logging prints one message
           for every 100,000 rows, which often emits many lines per second
           (especially on fast hardware).
143
144 -s scale_factor
145 --scale=scale_factor
146 Multiply the number of rows generated by the scale factor. For
147 example, -s 100 will create 10,000,000 rows in the pgbench_accounts
148 table. Default is 1. When the scale is 20,000 or larger, the
149 columns used to hold account identifiers (aid columns) will switch
150 to using larger integers (bigint), in order to be big enough to
151 hold the range of account identifiers.
152
153 --foreign-keys
154 Create foreign key constraints between the standard tables. (This
155 option adds the f step to the initialization step sequence, if it
156 is not already present.)
157
158 --index-tablespace=index_tablespace
159 Create indexes in the specified tablespace, rather than the default
160 tablespace.
161
162 --tablespace=tablespace
163 Create tables in the specified tablespace, rather than the default
164 tablespace.
165
166 --unlogged-tables
167 Create all tables as unlogged tables, rather than permanent tables.
168
169 Benchmarking Options
170 pgbench accepts the following command-line benchmarking arguments:
171
172 -b scriptname[@weight]
173 --builtin=scriptname[@weight]
           Add the specified built-in script to the list of executed
           scripts. An optional integer weight after @ allows adjusting the
           probability of selecting the script; if not specified, it is set
           to 1. Available built-in scripts are: tpcb-like, simple-update
           and select-only. Unambiguous prefixes of built-in names are
           accepted. With the special name list, show the list of built-in
           scripts and exit immediately.
180
181 -c clients
182 --client=clients
183 Number of clients simulated, that is, number of concurrent database
184 sessions. Default is 1.
185
186 -C
187 --connect
188 Establish a new connection for each transaction, rather than doing
189 it just once per client session. This is useful to measure the
190 connection overhead.
191
192 -d
193 --debug
194 Print debugging output.
195
196 -D varname=value
197 --define=varname=value
198 Define a variable for use by a custom script (see below). Multiple
199 -D options are allowed.
200
201 -f filename[@weight]
202 --file=filename[@weight]
           Add a transaction script read from filename to the list of
           executed scripts. An optional integer weight after @ allows
           adjusting the probability of selecting the script. See below for
           details.
206
207 -j threads
208 --jobs=threads
209 Number of worker threads within pgbench. Using more than one thread
210 can be helpful on multi-CPU machines. Clients are distributed as
211 evenly as possible among available threads. Default is 1.
212
213 -l
214 --log
215 Write information about each transaction to a log file. See below
216 for details.
217
218 -L limit
219 --latency-limit=limit
           Transactions that last more than limit milliseconds are counted
           and reported separately, as late.
222
223 When throttling is used (--rate=...), transactions that lag behind
224 schedule by more than limit ms, and thus have no hope of meeting
225 the latency limit, are not sent to the server at all. They are
226 counted and reported separately as skipped.
227
228 -M querymode
229 --protocol=querymode
230 Protocol to use for submitting queries to the server:
231
232 · simple: use simple query protocol.
233
234 · extended: use extended query protocol.
235
236 · prepared: use extended query protocol with prepared statements.
237
238 The default is simple query protocol. (See Chapter 53 for more
239 information.)
240
241 -n
242 --no-vacuum
243 Perform no vacuuming before running the test. This option is
244 necessary if you are running a custom test scenario that does not
245 include the standard tables pgbench_accounts, pgbench_branches,
246 pgbench_history, and pgbench_tellers.
247
248 -N
249 --skip-some-updates
250 Run built-in simple-update script. Shorthand for -b simple-update.
251
252 -P sec
253 --progress=sec
254 Show progress report every sec seconds. The report includes the
255 time since the beginning of the run, the TPS since the last report,
256 and the transaction latency average and standard deviation since
257 the last report. Under throttling (-R), the latency is computed
258 with respect to the transaction scheduled start time, not the
259 actual transaction beginning time, thus it also includes the
260 average schedule lag time.
261
262 -r
263 --report-latencies
264 Report the average per-statement latency (execution time from the
265 perspective of the client) of each command after the benchmark
266 finishes. See below for details.
267
268 -R rate
269 --rate=rate
270 Execute transactions targeting the specified rate instead of
271 running as fast as possible (the default). The rate is given in
272 transactions per second. If the targeted rate is above the maximum
273 possible rate, the rate limit won't impact the results.
274
275 The rate is targeted by starting transactions along a
276 Poisson-distributed schedule time line. The expected start time
277 schedule moves forward based on when the client first started, not
278 when the previous transaction ended. That approach means that when
279 transactions go past their original scheduled end time, it is
280 possible for later ones to catch up again.
281
282 When throttling is active, the transaction latency reported at the
283 end of the run is calculated from the scheduled start times, so it
284 includes the time each transaction had to wait for the previous
285 transaction to finish. The wait time is called the schedule lag
286 time, and its average and maximum are also reported separately. The
287 transaction latency with respect to the actual transaction start
288 time, i.e. the time spent executing the transaction in the
289 database, can be computed by subtracting the schedule lag time from
290 the reported latency.
291
292 If --latency-limit is used together with --rate, a transaction can
293 lag behind so much that it is already over the latency limit when
294 the previous transaction ends, because the latency is calculated
295 from the scheduled start time. Such transactions are not sent to
296 the server, but are skipped altogether and counted separately.
297
298 A high schedule lag time is an indication that the system cannot
299 process transactions at the specified rate, with the chosen number
300 of clients and threads. When the average transaction execution time
301 is longer than the scheduled interval between each transaction,
302 each successive transaction will fall further behind, and the
303 schedule lag time will keep increasing the longer the test run is.
304 When that happens, you will have to reduce the specified
305 transaction rate.
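
           For example (with a hypothetical database name), the following
           targets 200 transactions per second and, via --latency-limit,
           counts transactions slower than 50 ms as late, skipping those
           that have no hope of meeting the limit:

               pgbench -c 10 -j 2 -T 600 -R 200 -L 50 bench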
306
307 -s scale_factor
308 --scale=scale_factor
309 Report the specified scale factor in pgbench's output. With the
310 built-in tests, this is not necessary; the correct scale factor
311 will be detected by counting the number of rows in the
312 pgbench_branches table. However, when testing only custom
313 benchmarks (-f option), the scale factor will be reported as 1
314 unless this option is used.
315
316 -S
317 --select-only
318 Run built-in select-only script. Shorthand for -b select-only.
319
320 -t transactions
321 --transactions=transactions
322 Number of transactions each client runs. Default is 10.
323
324 -T seconds
325 --time=seconds
326 Run the test for this many seconds, rather than a fixed number of
327 transactions per client. -t and -T are mutually exclusive.
328
329 -v
330 --vacuum-all
331 Vacuum all four standard tables before running the test. With
332 neither -n nor -v, pgbench will vacuum the pgbench_tellers and
333 pgbench_branches tables, and will truncate pgbench_history.
334
335 --aggregate-interval=seconds
336 Length of aggregation interval (in seconds). May be used only with
337 -l option. With this option, the log contains per-interval summary
338 data, as described below.
339
340 --log-prefix=prefix
341 Set the filename prefix for the log files created by --log. The
342 default is pgbench_log.
343
344 --progress-timestamp
345 When showing progress (option -P), use a timestamp (Unix epoch)
346 instead of the number of seconds since the beginning of the run.
347 The unit is in seconds, with millisecond precision after the dot.
348 This helps compare logs generated by various tools.
349
350 --random-seed=SEED
351 Set random generator seed. Seeds the system random number
352 generator, which then produces a sequence of initial generator
353 states, one for each thread. Values for SEED may be: time (the
354 default, the seed is based on the current time), rand (use a strong
355 random source, failing if none is available), or an unsigned
356 decimal integer value. The random generator is invoked explicitly
357 from a pgbench script (random... functions) or implicitly (for
358 instance option --rate uses it to schedule transactions). When
359 explicitly set, the value used for seeding is shown on the
360 terminal. Any value allowed for SEED may also be provided through
361 the environment variable PGBENCH_RANDOM_SEED. To ensure that the
362 provided seed impacts all possible uses, put this option first or
363 use the environment variable.
364
           Setting the seed explicitly makes it possible to reproduce a
           pgbench run exactly, as far as random numbers are concerned.
           Because the random state is managed per thread, an identical
           invocation will produce exactly the same run, provided there is
           one client per thread and there are no external or data
           dependencies. From a statistical viewpoint, reproducing runs
           exactly is a bad idea because it can hide performance
           variability or improve performance unduly, e.g. by hitting the
           same pages as a previous run. However, it can also be of great
           help for debugging, for instance when re-running a tricky case
           that leads to an error. Use wisely.
375
376 --sampling-rate=rate
377 Sampling rate, used when writing data into the log, to reduce the
378 amount of log generated. If this option is given, only the
379 specified fraction of transactions are logged. 1.0 means all
380 transactions will be logged, 0.05 means only 5% of the transactions
381 will be logged.
382
383 Remember to take the sampling rate into account when processing the
384 log file. For example, when computing TPS values, you need to
385 multiply the numbers accordingly (e.g. with 0.01 sample rate,
386 you'll only get 1/100 of the actual TPS).
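
           As a rough sketch (the log file name and counts here are
           hypothetical), with --sampling-rate=0.05 a log containing 5,000
           lines represents about 5000 / 0.05 = 100,000 actually processed
           transactions:

               wc -l < pgbench_log.1234     # suppose this prints 5000
               # estimated total transactions: 5000 / 0.05 = 100000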
387
388 Common Options
389 pgbench accepts the following command-line common arguments:
390
391 -h hostname
392 --host=hostname
393 The database server's host name
394
395 -p port
396 --port=port
397 The database server's port number
398
399 -U login
400 --username=login
401 The user name to connect as
402
403 -V
404 --version
405 Print the pgbench version and exit.
406
407 -?
408 --help
409 Show help about pgbench command line arguments, and exit.
410
NOTES
   What is the “Transaction” Actually Performed in pgbench?
       pgbench executes test scripts chosen randomly from a specified
       list. Scripts may be built-in scripts selected with -b or
       user-provided custom scripts selected with -f. Each script may be
       given a relative weight, specified after an @, to change its
       probability of being selected; the default weight is 1. Scripts
       with a weight of 0 are ignored.
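
       For example, with a hypothetical database named bench, the following
       runs the built-in select-only script nine times as often as the
       built-in tpcb-like script:

           pgbench -b select-only@9 -b tpcb-like@1 -T 60 bench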
418
       The default built-in transaction script (also invoked with -b
       tpcb-like) issues seven commands per transaction over randomly
       chosen aid, tid, bid and delta. The scenario is inspired by the
       TPC-B benchmark, but is not actually TPC-B, hence the name.
423
424 1. BEGIN;
425
426 2. UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid
427 = :aid;
428
429 3. SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
430
431 4. UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid =
432 :tid;
433
434 5. UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid
435 = :bid;
436
437 6. INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES
438 (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
439
440 7. END;
441
442 If you select the simple-update built-in (also -N), steps 4 and 5
443 aren't included in the transaction. This will avoid update contention
444 on these tables, but it makes the test case even less like TPC-B.
445
446 If you select the select-only built-in (also -S), only the SELECT is
447 issued.
448
449 Custom Scripts
450 pgbench has support for running custom benchmark scenarios by replacing
451 the default transaction script (described above) with a transaction
452 script read from a file (-f option). In this case a “transaction”
453 counts as one execution of a script file.
454
455 A script file contains one or more SQL commands terminated by
456 semicolons. Empty lines and lines beginning with -- are ignored. Script
457 files can also contain “meta commands”, which are interpreted by
458 pgbench itself, as described below.
459
460 Note
461 Before PostgreSQL 9.6, SQL commands in script files were terminated
462 by newlines, and so they could not be continued across lines. Now a
463 semicolon is required to separate consecutive SQL commands (though
464 a SQL command does not need one if it is followed by a meta
465 command). If you need to create a script file that works with both
466 old and new versions of pgbench, be sure to write each SQL command
467 on a single line ending with a semicolon.
468
469 There is a simple variable-substitution facility for script files.
470 Variable names must consist of letters (including non-Latin letters),
471 digits, and underscores. Variables can be set by the command-line -D
472 option, explained above, or by the meta commands explained below. In
473 addition to any variables preset by -D command-line options, there are
474 a few variables that are preset automatically, listed in Table 241. A
475 value specified for these variables using -D takes precedence over the
476 automatic presets. Once set, a variable's value can be inserted into a
477 SQL command by writing :variablename. When running more than one client
478 session, each session has its own set of variables.
479
480 Table 241. Automatic Variables
481 ┌─────────────┬────────────────────────────┐
482 │Variable │ Description │
483 ├─────────────┼────────────────────────────┤
484 │client_id │ unique number identifying │
485 │ │ the client session (starts │
486 │ │ from zero) │
487 ├─────────────┼────────────────────────────┤
488 │default_seed │ seed used in hash │
489 │ │ functions by default │
490 ├─────────────┼────────────────────────────┤
491 │random_seed │ random generator seed │
492 │ │ (unless overwritten with │
493 │ │ -D) │
494 ├─────────────┼────────────────────────────┤
495 │scale │ current scale factor │
496 └─────────────┴────────────────────────────┘
497
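       As a small illustration (the file name and database name are
       hypothetical), the following custom script selects one randomly
       chosen account per transaction:

           \set aid random(1, 100000 * :scale)
           SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

       It could be run against an already-initialized database with, for
       example, pgbench -s 10 -f account_select.sql bench, where -s sets
       the scale variable used by the script.
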
498 Script file meta commands begin with a backslash (\) and normally
499 extend to the end of the line, although they can be continued to
500 additional lines by writing backslash-return. Arguments to a meta
501 command are separated by white space. These meta commands are
502 supported:
503
504 \if expression
505 \elif expression
506 \else
507 \endif
508 This group of commands implements nestable conditional blocks,
509 similarly to psql's \if expression. Conditional expressions are
510 identical to those with \set, with non-zero values interpreted as
511 true.
512
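           As an illustration (not part of any built-in script), a
           conditional block might look like this:

               \set r random(1, 100)
               \if :r >= 90
               \set tier 3
               \elif :r >= 50
               \set tier 2
               \else
               \set tier 1
               \endif
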
513 \set varname expression
514 Sets variable varname to a value calculated from expression. The
515 expression may contain the NULL constant, Boolean constants TRUE
516 and FALSE, integer constants such as 5432, double constants such as
517 3.14159, references to variables :variablename, operators with
518 their usual SQL precedence and associativity, function calls, SQL
519 CASE generic conditional expressions and parentheses.
520
521 Functions and most operators return NULL on NULL input.
522
           For conditional purposes, non-zero numerical values are TRUE,
           and zero numerical values and NULL are FALSE.
525
526 When no final ELSE clause is provided to a CASE, the default value
527 is NULL.
528
529 Examples:
530
531 \set ntellers 10 * :scale
532 \set aid (1021 * random(1, 100000 * :scale)) % \
533 (100000 * :scale) + 1
534 \set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END
535
536 \sleep number [ us | ms | s ]
537 Causes script execution to sleep for the specified duration in
538 microseconds (us), milliseconds (ms) or seconds (s). If the unit is
539 omitted then seconds are the default. number can be either an
540 integer constant or a :variablename reference to a variable having
541 an integer value.
542
543 Example:
544
545 \sleep 10 ms
546
547 \setshell varname command [ argument ... ]
548 Sets variable varname to the result of the shell command command
549 with the given argument(s). The command must return an integer
550 value through its standard output.
551
552 command and each argument can be either a text constant or a
553 :variablename reference to a variable. If you want to use an
554 argument starting with a colon, write an additional colon at the
555 beginning of argument.
556
557 Example:
558
559 \setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
560
561 \shell command [ argument ... ]
562 Same as \setshell, but the result of the command is discarded.
563
564 Example:
565
566 \shell command literal_argument :variable ::literal_starting_with_colon
567
568 Built-In Operators
569 The arithmetic, bitwise, comparison and logical operators listed in
570 Table 242 are built into pgbench and may be used in expressions
571 appearing in \set.
572
573 Table 242. pgbench Operators by increasing precedence
574 ┌──────────────────┬──────────────────┬───────────┬────────┐
575 │Operator │ Description │ Example │ Result │
576 ├──────────────────┼──────────────────┼───────────┼────────┤
577 │OR │ logical or │ 5 or 0 │ TRUE │
578 ├──────────────────┼──────────────────┼───────────┼────────┤
579 │AND │ logical and │ 3 and 0 │ FALSE │
580 ├──────────────────┼──────────────────┼───────────┼────────┤
581 │NOT │ logical not │ not false │ TRUE │
582 ├──────────────────┼──────────────────┼───────────┼────────┤
583 │IS [NOT] │ value tests │ 1 is null │ FALSE │
584 │(NULL|TRUE|FALSE) │ │ │ │
585 ├──────────────────┼──────────────────┼───────────┼────────┤
586 │ISNULL|NOTNULL │ null tests │ 1 notnull │ TRUE │
587 ├──────────────────┼──────────────────┼───────────┼────────┤
588 │= │ is equal │ 5 = 4 │ FALSE │
589 ├──────────────────┼──────────────────┼───────────┼────────┤
590 │<> │ is not equal │ 5 <> 4 │ TRUE │
591 ├──────────────────┼──────────────────┼───────────┼────────┤
592 │!= │ is not equal │ 5 != 5 │ FALSE │
593 ├──────────────────┼──────────────────┼───────────┼────────┤
594 │< │ lower than │ 5 < 4 │ FALSE │
595 ├──────────────────┼──────────────────┼───────────┼────────┤
596 │<= │ lower or equal │ 5 <= 4 │ FALSE │
597 ├──────────────────┼──────────────────┼───────────┼────────┤
598 │> │ greater than │ 5 > 4 │ TRUE │
599 ├──────────────────┼──────────────────┼───────────┼────────┤
600 │>= │ greater or equal │ 5 >= 4 │ TRUE │
601 ├──────────────────┼──────────────────┼───────────┼────────┤
602 │| │ integer bitwise │ 1 | 2 │ 3 │
603 │ │ OR │ │ │
604 ├──────────────────┼──────────────────┼───────────┼────────┤
605 │# │ integer bitwise │ 1 # 3 │ 2 │
606 │ │ XOR │ │ │
607 ├──────────────────┼──────────────────┼───────────┼────────┤
608 │& │ integer bitwise │ 1 & 3 │ 1 │
609 │ │ AND │ │ │
610 ├──────────────────┼──────────────────┼───────────┼────────┤
611 │~ │ integer bitwise │ ~ 1 │ -2 │
612 │ │ NOT │ │ │
613 ├──────────────────┼──────────────────┼───────────┼────────┤
614 │<< │ integer bitwise │ 1 << 2 │ 4 │
615 │ │ shift left │ │ │
616 ├──────────────────┼──────────────────┼───────────┼────────┤
617 │>> │ integer bitwise │ 8 >> 2 │ 2 │
618 │ │ shift right │ │ │
619 ├──────────────────┼──────────────────┼───────────┼────────┤
620 │+ │ addition │ 5 + 4 │ 9 │
621 ├──────────────────┼──────────────────┼───────────┼────────┤
622 │- │ subtraction │ 3 - 2.0 │ 1.0 │
623 ├──────────────────┼──────────────────┼───────────┼────────┤
624 │* │ multiplication │ 5 * 4 │ 20 │
625 ├──────────────────┼──────────────────┼───────────┼────────┤
626 │/ │ division │ 5 / 3 │ 1 │
627 │ │ (integer │ │ │
628 │ │ truncates the │ │ │
629 │ │ results) │ │ │
630 ├──────────────────┼──────────────────┼───────────┼────────┤
631 │% │ modulo │ 3 % 2 │ 1 │
632 ├──────────────────┼──────────────────┼───────────┼────────┤
633 │- │ opposite │ - 2.0 │ -2.0 │
634 └──────────────────┴──────────────────┴───────────┴────────┘
635
636 Built-In Functions
637 The functions listed in Table 243 are built into pgbench and may be
638 used in expressions appearing in \set.
639
640 Table 243. pgbench Functions
641 ┌───────────────────────┬───────────────┬───────────────────────────┬───────────────────────┬────────────────────────┐
642 │Function │ Return Type │ Description │ Example │ Result │
643 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
644 │abs(a) │ same as a │ absolute │ abs(-17) │ 17 │
645 │ │ │ value │ │ │
646 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
647 │debug(a) │ same as a │ print a to │ debug(5432.1) │ 5432.1 │
648 │ │ │ stderr, │ │ │
649 │ │ │ and │ │ │
650 │ │ │ return a │ │ │
651 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
652 │double(i) │ double │ cast to │ double(5432) │ 5432.0 │
653 │ │ │ double │ │ │
654 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
655 │exp(x) │ double │ exponential │ exp(1.0) │ 2.718281828459045 │
656 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
657 │greatest(a [, │ double if any │ largest value │ greatest(5, │ 5 │
658 │... ] ) │ a is double, │ among │ 4, 3, 2) │ │
659 │ │ else integer │ arguments │ │ │
660 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
661 │hash(a [, │ integer │ alias for │ hash(10, │ -5817877081768721676 │
662 │seed ] ) │ │ hash_murmur2() │ 5432) │ │
663 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
664 │hash_fnv1a(a │ integer │ FNV-1a hash │ hash_fnv1a(10, │ -7793829335365542153 │
665 │[, seed ] ) │ │ │ 5432) │ │
666 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
667 │hash_murmur2(a │ integer │ MurmurHash2 │ hash_murmur2(10, │ -5817877081768721676 │
668 │[, seed ] ) │ │ hash │ 5432) │ │
669 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
670 │int(x) │ integer │ cast to int │ int(5.4 + 3.8) │ 9 │
671 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
672 │least(a [, ... │ double if any │ smallest value │ least(5, 4, 3, │ 2.1 │
673 │] ) │ a is double, │ among │ 2.1) │ │
674 │ │ else integer │ arguments │ │ │
675 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
676 │ln(x) │ double │ natural │ ln(2.718281828459045) │ 1.0 │
677 │ │ │ logarithm │ │ │
678 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
679 │mod(i, j) │ integer │ modulo │ mod(54, 32) │ 22 │
680 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
681 │pi() │ double │ value of the │ pi() │ 3.14159265358979323846 │
682 │ │ │ constant PI │ │ │
683 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
684 │pow(x, y), │ double │ exponentiation │ pow(2.0, 10), │ 1024.0 │
685 │power(x, y) │ │ │ power(2.0, 10) │ │
686 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
687 │random(lb, ub) │ integer │ uniformly-distributed │ random(1, 10) │ an integer between 1 │
688 │ │ │ random integer │ │ and 10 │
689 │ │ │ in [lb, ub] │ │ │
690 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
691 │random_exponential(lb, │ integer │ exponentially-distributed │ random_exponential(1, │ an integer between 1 │
692 │ub, parameter) │ │ random integer in │ 10, 3.0) │ and 10 │
693 │ │ │ [lb, ub], │ │ │
694 │ │ │ see │ │ │
695 │ │ │ below │ │ │
696 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
697 │random_gaussian(lb, │ integer │ Gaussian-distributed │ random_gaussian(1, │ an integer between 1 │
698 │ub, parameter) │ │ random integer in [lb, │ 10, 2.5) │ and 10 │
699 │ │ │ ub], │ │ │
700 │ │ │ see below │ │ │
701 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
702 │random_zipfian(lb, ub, │ integer │ Zipfian-distributed │ random_zipfian(1, 10, │ an integer between 1 │
703 │parameter) │ │ random integer in [lb, │ 1.5) │ and 10 │
704 │ │ │ ub], │ │ │
705 │ │ │ see below │ │ │
706 ├───────────────────────┼───────────────┼───────────────────────────┼───────────────────────┼────────────────────────┤
707 │sqrt(x) │ double │ square root │ sqrt(2.0) │ 1.414213562 │
708 └───────────────────────┴───────────────┴───────────────────────────┴───────────────────────┴────────────────────────┘
709
       The random function generates values using a uniform distribution,
       that is, all the values are drawn within the specified range with
       equal probability. The random_exponential, random_gaussian and
       random_zipfian functions require an additional double parameter
       which determines the precise shape of the distribution; a short
       usage sketch follows the list below.
715
716 · For an exponential distribution, parameter controls the
717 distribution by truncating a quickly-decreasing exponential
718 distribution at parameter, and then projecting onto integers
719 between the bounds. To be precise, with
720
721 f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))
722
723 Then value i between min and max inclusive is drawn with
724 probability: f(i) - f(i + 1).
725
726 Intuitively, the larger the parameter, the more frequently values
727 close to min are accessed, and the less frequently values close to
728 max are accessed. The closer to 0 parameter is, the flatter (more
729 uniform) the access distribution. A crude approximation of the
730 distribution is that the most frequent 1% values in the range,
731 close to min, are drawn parameter% of the time. The parameter value
732 must be strictly positive.
733
734 · For a Gaussian distribution, the interval is mapped onto a standard
735 normal distribution (the classical bell-shaped Gaussian curve)
736 truncated at -parameter on the left and +parameter on the right.
737 Values in the middle of the interval are more likely to be drawn.
738 To be precise, if PHI(x) is the cumulative distribution function of
739 the standard normal distribution, with mean mu defined as (max +
740 min) / 2.0, with
741
742 f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
743 (2.0 * PHI(parameter) - 1)
744
745 then value i between min and max inclusive is drawn with
746 probability: f(i + 0.5) - f(i - 0.5). Intuitively, the larger the
747 parameter, the more frequently values close to the middle of the
748 interval are drawn, and the less frequently values close to the min
749 and max bounds. About 67% of values are drawn from the middle 1.0 /
750 parameter, that is a relative 0.5 / parameter around the mean, and
751 95% in the middle 2.0 / parameter, that is a relative 1.0 /
752 parameter around the mean; for instance, if parameter is 4.0, 67%
753 of values are drawn from the middle quarter (1.0 / 4.0) of the
754 interval (i.e. from 3.0 / 8.0 to 5.0 / 8.0) and 95% from the middle
755 half (2.0 / 4.0) of the interval (second and third quartiles). The
756 minimum parameter is 2.0 for performance of the Box-Muller
757 transform.
758
759 · random_zipfian generates an approximated bounded Zipfian
760 distribution. For parameter in (0, 1), an approximated algorithm is
761 taken from "Quickly Generating Billion-Record Synthetic Databases",
762 Jim Gray et al, SIGMOD 1994. For parameter in (1, 1000), a
763 rejection method is used, based on "Non-Uniform Random Variate
764 Generation", Luc Devroye, p. 550-551, Springer 1986. The
765 distribution is not defined when the parameter's value is 1.0. The
766 function's performance is poor for parameter values close and above
767 1.0 and on a small range.
768
769 parameter defines how skewed the distribution is. The larger the
770 parameter, the more frequently values closer to the beginning of
771 the interval are drawn. The closer to 0 parameter is, the flatter
772 (more uniform) the output distribution. The distribution is such
773 that, assuming the range starts from 1, the ratio of the
774 probability of drawing k versus drawing k+1 is
         ((k+1)/k)**parameter. For example, random_zipfian(1, ..., 2.5)
         produces the value 1 about (2/1)**2.5 = 5.66 times more frequently
         than 2, which itself is produced (3/2)**2.5 = 2.76 times more
         frequently than 3, and so on.
779
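       For illustration only (the bounds and shape parameters here are
       arbitrary), these distributions might be used in a script like
       this:

           \set aid random_gaussian(1, 100000 * :scale, 4.0)
           \set tid random_exponential(1, 10 * :scale, 5.0)
           SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
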
780 Hash functions hash, hash_murmur2 and hash_fnv1a accept an input value
781 and an optional seed parameter. In case the seed isn't provided the
782 value of :default_seed is used, which is initialized randomly unless
783 set by the command-line -D option. Hash functions can be used to
784 scatter the distribution of random functions such as random_zipfian or
       random_exponential. For instance, the following pgbench script
       simulates a workload typical of social media and blogging
       platforms, where a few accounts generate most of the load:
788
789 \set r random_zipfian(0, 100000000, 1.07)
790 \set k abs(hash(:r)) % 1000000
791
       In some cases, several distinct distributions are needed that do
       not correlate with each other, and this is when the optional seed
       parameter comes in handy:
795
796 \set k1 abs(hash(:r, :default_seed + 123)) % 1000000
797 \set k2 abs(hash(:r, :default_seed + 321)) % 1000000
798
799 As an example, the full definition of the built-in TPC-B-like
800 transaction is:
801
802 \set aid random(1, 100000 * :scale)
803 \set bid random(1, 1 * :scale)
804 \set tid random(1, 10 * :scale)
805 \set delta random(-5000, 5000)
806 BEGIN;
807 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
808 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
809 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
810 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
811 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
812 END;
813
814 This script allows each iteration of the transaction to reference
815 different, randomly-chosen rows. (This example also shows why it's
816 important for each client session to have its own variables — otherwise
817 they'd not be independently touching different rows.)
818
819 Per-Transaction Logging
820 With the -l option (but without the --aggregate-interval option),
821 pgbench writes information about each transaction to a log file. The
822 log file will be named prefix.nnn, where prefix defaults to
823 pgbench_log, and nnn is the PID of the pgbench process. The prefix can
824 be changed by using the --log-prefix option. If the -j option is 2 or
825 higher, so that there are multiple worker threads, each will have its
826 own log file. The first worker will use the same name for its log file
827 as in the standard single worker case. The additional log files for the
828 other workers will be named prefix.nnn.mmm, where mmm is a sequential
829 number for each worker starting with 1.
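
       For example (the prefix and database name are hypothetical), the
       following run with two worker threads writes two log files,
       run1.nnn and run1.nnn.1:

           pgbench -c 10 -j 2 -T 60 -l --log-prefix=run1 bench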
830
831 The format of the log is:
832
833 client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]
834
835 where client_id indicates which client session ran the transaction,
836 transaction_no counts how many transactions have been run by that
837 session, time is the total elapsed transaction time in microseconds,
838 script_no identifies which script file was used (useful when multiple
839 scripts were specified with -f or -b), and time_epoch/time_us are a
840 Unix-epoch time stamp and an offset in microseconds (suitable for
841 creating an ISO 8601 time stamp with fractional seconds) showing when
842 the transaction completed. The schedule_lag field is the difference
843 between the transaction's scheduled start time, and the time it
844 actually started, in microseconds. It is only present when the --rate
845 option is used. When both --rate and --latency-limit are used, the time
846 for a skipped transaction will be reported as skipped.
847
848 Here is a snippet of a log file generated in a single-client run:
849
850 0 199 2241 0 1175850568 995598
851 0 200 2465 0 1175850568 998079
852 0 201 2513 0 1175850569 608
853 0 202 2038 0 1175850569 2663
854
855 Another example with --rate=100 and --latency-limit=5 (note the
856 additional schedule_lag column):
857
858 0 81 4621 0 1412881037 912698 3005
859 0 82 6173 0 1412881037 914578 4304
860 0 83 skipped 0 1412881037 914578 5217
861 0 83 skipped 0 1412881037 914578 5099
862 0 83 4722 0 1412881037 916203 3108
863 0 84 4142 0 1412881037 918023 2333
864 0 85 2465 0 1412881037 919759 740
865
866 In this example, transaction 82 was late, because its latency (6.173
867 ms) was over the 5 ms limit. The next two transactions were skipped,
868 because they were already late before they were even started.
869
870 When running a long test on hardware that can handle a lot of
871 transactions, the log files can become very large. The --sampling-rate
872 option can be used to log only a random sample of transactions.
873
874 Aggregated Logging
875 With the --aggregate-interval option, a different format is used for
876 the log files:
877
878 interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ]
879
880 where interval_start is the start of the interval (as a Unix epoch time
881 stamp), num_transactions is the number of transactions within the
882 interval, sum_latency is the sum of the transaction latencies within
883 the interval, sum_latency_2 is the sum of squares of the transaction
884 latencies within the interval, min_latency is the minimum latency
885 within the interval, and max_latency is the maximum latency within the
886 interval. The next fields, sum_lag, sum_lag_2, min_lag, and max_lag,
887 are only present if the --rate option is used. They provide statistics
888 about the time each transaction had to wait for the previous one to
889 finish, i.e. the difference between each transaction's scheduled start
890 time and the time it actually started. The very last field, skipped, is
891 only present if the --latency-limit option is used, too. It counts the
892 number of transactions skipped because they would have started too
893 late. Each transaction is counted in the interval when it was
894 committed.
895
896 Here is some example output:
897
898 1345828501 5601 1542744 483552416 61 2573
899 1345828503 7884 1979812 565806736 60 1479
900 1345828505 7208 1979422 567277552 59 1391
901 1345828507 7685 1980268 569784714 60 1398
902 1345828509 7073 1979779 573489941 236 1411
903
904 Notice that while the plain (unaggregated) log file shows which script
905 was used for each transaction, the aggregated log does not. Therefore
906 if you need per-script data, you need to aggregate the data on your
907 own.
908
909 Per-Statement Latencies
910 With the -r option, pgbench collects the elapsed transaction time of
911 each statement executed by every client. It then reports an average of
912 those values, referred to as the latency for each statement, after the
913 benchmark has finished.
914
915 For the default script, the output will look similar to this:
916
917 starting vacuum...end.
918 transaction type: <builtin: TPC-B (sort of)>
919 scaling factor: 1
920 query mode: simple
921 number of clients: 10
922 number of threads: 1
923 number of transactions per client: 1000
924 number of transactions actually processed: 10000/10000
925 latency average = 15.844 ms
926 latency stddev = 2.715 ms
927 tps = 618.764555 (including connections establishing)
928 tps = 622.977698 (excluding connections establishing)
929 statement latencies in milliseconds:
930 0.002 \set aid random(1, 100000 * :scale)
931 0.005 \set bid random(1, 1 * :scale)
932 0.002 \set tid random(1, 10 * :scale)
933 0.001 \set delta random(-5000, 5000)
934 0.326 BEGIN;
935 0.603 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
936 0.454 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
937 5.528 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
938 7.335 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
939 0.371 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
940 1.212 END;
941
942 If multiple script files are specified, the averages are reported
943 separately for each script file.
944
945 Note that collecting the additional timing information needed for
946 per-statement latency computation adds some overhead. This will slow
947 average execution speed and lower the computed TPS. The amount of
948 slowdown varies significantly depending on platform and hardware.
949 Comparing average TPS values with and without latency reporting enabled
950 is a good way to measure if the timing overhead is significant.
951
952 Good Practices
953 It is very easy to use pgbench to produce completely meaningless
954 numbers. Here are some guidelines to help you get useful results.
955
       In the first place, never believe any test that runs for only a few
       seconds. Use the -t or -T option to make the run last at least a few
       minutes, so as to average out noise. In some cases you might need
       hours to get reproducible numbers. It's a good idea to try the test
       run a few times, to find out if your numbers are reproducible or
       not.
961
962 For the default TPC-B-like test scenario, the initialization scale
963 factor (-s) should be at least as large as the largest number of
964 clients you intend to test (-c); else you'll mostly be measuring update
965 contention. There are only -s rows in the pgbench_branches table, and
966 every transaction wants to update one of them, so -c values in excess
967 of -s will undoubtedly result in lots of transactions blocked waiting
968 for other transactions.
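
       For instance (with a hypothetical database name), to benchmark up to
       64 concurrent clients, initialize with at least that scale factor:

           pgbench -i -s 64 bench
           pgbench -c 64 -j 8 -T 900 bench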
969
970 The default test scenario is also quite sensitive to how long it's been
971 since the tables were initialized: accumulation of dead rows and dead
972 space in the tables changes the results. To understand the results you
973 must keep track of the total number of updates and when vacuuming
974 happens. If autovacuum is enabled it can result in unpredictable
975 changes in measured performance.
976
977 A limitation of pgbench is that it can itself become the bottleneck
978 when trying to test a large number of client sessions. This can be
979 alleviated by running pgbench on a different machine from the database
980 server, although low network latency will be essential. It might even
981 be useful to run several pgbench instances concurrently, on several
982 client machines, against the same database server.
983
984 Security
985 If untrusted users have access to a database that has not adopted a
986 secure schema usage pattern, do not run pgbench in that database.
987 pgbench uses unqualified names and does not manipulate the search path.
988
989
990
PostgreSQL 11.3                       2019                         PGBENCH(1)