PGBENCH(1)               PostgreSQL 14.3 Documentation               PGBENCH(1)

NAME
       pgbench - run a benchmark test on PostgreSQL

SYNOPSIS
       pgbench -i [option...] [dbname]

       pgbench [option...] [dbname]

DESCRIPTION
       pgbench is a simple program for running benchmark tests on PostgreSQL.
       It runs the same sequence of SQL commands over and over, possibly in
       multiple concurrent database sessions, and then calculates the average
       transaction rate (transactions per second). By default, pgbench tests a
       scenario that is loosely based on TPC-B, involving five SELECT, UPDATE,
       and INSERT commands per transaction. However, it is easy to test other
       cases by writing your own transaction script files.

       Typical output from pgbench looks like:

           transaction type: <builtin: TPC-B (sort of)>
           scaling factor: 10
           query mode: simple
           number of clients: 10
           number of threads: 1
           number of transactions per client: 1000
           number of transactions actually processed: 10000/10000
           latency average = 11.013 ms
           latency stddev = 7.351 ms
           initial connection time = 45.758 ms
           tps = 896.967014 (without initial connection time)

       The first six lines report some of the most important parameter
       settings. The next line reports the number of transactions completed
       and intended (the latter being just the product of the number of
       clients and the number of transactions per client); these will be equal
       unless the run failed before completion. (In -T mode, only the actual
       number of transactions is printed.) The last line reports the number of
       transactions per second.
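
       As a quick sanity check on numbers like these, throughput and latency
       are linked: with C clients each spending an average of L seconds per
       transaction, the sustainable rate is roughly C/L. A sketch in Python,
       using the sample output above:

```python
# Rough consistency check between the reported latency and TPS above.
clients = 10
latency_s = 11.013e-3        # latency average, in seconds
tps_reported = 896.967014

tps_estimate = clients / latency_s   # ~908 tps
# The estimate lands within a couple of percent of the reported value;
# a small gap is expected, since the two figures are not measured over
# exactly the same per-transaction interval.
assert abs(tps_estimate - tps_reported) / tps_reported < 0.02
```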

       The default TPC-B-like transaction test requires specific tables to be
       set up beforehand. pgbench should be invoked with the -i (initialize)
       option to create and populate these tables. (When you are testing a
       custom script, you don't need this step, but will instead need to do
       whatever setup your test needs.) Initialization looks like:

           pgbench -i [ other-options ] dbname

       where dbname is the name of the already-created database to test in.
       (You may also need -h, -p, and/or -U options to specify how to connect
       to the database server.)

       Caution
           pgbench -i creates four tables, pgbench_accounts, pgbench_branches,
           pgbench_history, and pgbench_tellers, destroying any existing
           tables of these names. Be very careful to use another database if
           you have tables having these names!

       At the default “scale factor” of 1, the tables initially contain this
       many rows:

           table                   # of rows
           ---------------------------------
           pgbench_branches        1
           pgbench_tellers         10
           pgbench_accounts        100000
           pgbench_history         0

       You can (and, for most purposes, probably should) increase the number
       of rows by using the -s (scale factor) option. The -F (fillfactor)
       option might also be used at this point.
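
       The table sizes above all scale linearly with -s (except
       pgbench_history, which starts empty). A sketch, in Python, of the
       relationship:

```python
# Initial row counts as a function of the -s scale factor, matching
# the table above. (Illustrative sketch, not pgbench source code.)
def initial_rows(scale: int) -> dict[str, int]:
    return {
        "pgbench_branches": scale,
        "pgbench_tellers": 10 * scale,
        "pgbench_accounts": 100_000 * scale,
        "pgbench_history": 0,  # populated during the benchmark run
    }

assert initial_rows(1)["pgbench_accounts"] == 100_000
```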

       Once you have done the necessary setup, you can run your benchmark with
       a command that doesn't include -i, that is

           pgbench [ options ] dbname

       In nearly all cases, you'll need some options to make a useful test.
       The most important options are -c (number of clients), -t (number of
       transactions), -T (time limit), and -f (specify a custom script file).
       See below for a full list.

OPTIONS
       The following is divided into three subsections. Different options are
       used during database initialization and while running benchmarks, but
       some options are useful in both cases.

   Initialization Options
       pgbench accepts the following command-line initialization arguments:

       dbname
           Specifies the name of the database to test in. If this is not
           specified, the environment variable PGDATABASE is used. If that is
           not set, the user name specified for the connection is used.

       -i
       --initialize
           Required to invoke initialization mode.

       -I init_steps
       --init-steps=init_steps
           Perform just a selected set of the normal initialization steps.
           init_steps specifies the initialization steps to be performed,
           using one character per step. Each step is invoked in the specified
           order. The default is dtgvp. The available steps are:

           d (Drop)
               Drop any existing pgbench tables.

           t (create Tables)
               Create the tables used by the standard pgbench scenario, namely
               pgbench_accounts, pgbench_branches, pgbench_history, and
               pgbench_tellers.

           g or G (Generate data, client-side or server-side)
               Generate data and load it into the standard tables, replacing
               any data already present.

               With g (client-side data generation), data is generated in the
               pgbench client and then sent to the server. This uses the
               client/server bandwidth extensively through a COPY. Using g
               causes logging to print one message every 100,000 rows while
               generating data for the pgbench_accounts table.

               With G (server-side data generation), only small queries are
               sent from the pgbench client and the data is actually generated
               in the server. No significant bandwidth is required for this
               variant, but the server will do more work. Using G causes
               logging not to print any progress message while generating
               data.

               The default initialization behavior uses client-side data
               generation (equivalent to g).

           v (Vacuum)
               Invoke VACUUM on the standard tables.

           p (create Primary keys)
               Create primary key indexes on the standard tables.

           f (create Foreign keys)
               Create foreign key constraints between the standard tables.
               (Note that this step is not performed by default.)

       -F fillfactor
       --fillfactor=fillfactor
           Create the pgbench_accounts, pgbench_tellers and pgbench_branches
           tables with the given fillfactor. Default is 100.

       -n
       --no-vacuum
           Perform no vacuuming during initialization. (This option suppresses
           the v initialization step, even if it was specified in -I.)

       -q
       --quiet
           Switch logging to quiet mode, producing only one progress message
           every 5 seconds. The default logging prints one message every
           100,000 rows, which often outputs many lines per second (especially
           on good hardware).

           This setting has no effect if G is specified in -I.

       -s scale_factor
       --scale=scale_factor
           Multiply the number of rows generated by the scale factor. For
           example, -s 100 will create 10,000,000 rows in the pgbench_accounts
           table. Default is 1. When the scale is 20,000 or larger, the
           columns used to hold account identifiers (aid columns) will switch
           to using larger integers (bigint), in order to be big enough to
           hold the range of account identifiers.
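
       The 20,000 threshold follows from 32-bit integer range: the largest
       aid is 100000 × scale, and a scale much past 20,000 would overflow a
       four-byte integer. Sketched in Python:

```python
# Why aid switches to bigint at scale >= 20000: the maximum account
# identifier is 100000 * scale, and a 32-bit integer tops out at
# 2147483647. (Illustrative sketch of the rule stated above.)
INT32_MAX = 2**31 - 1

def aid_column_type(scale: int) -> str:
    return "bigint" if scale >= 20_000 else "integer"

assert aid_column_type(19_999) == "integer"
assert aid_column_type(20_000) == "bigint"
assert 100_000 * 21_474 <= INT32_MAX   # still fits...
assert 100_000 * 21_475 > INT32_MAX    # ...but this would overflow
```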

       --foreign-keys
           Create foreign key constraints between the standard tables. (This
           option adds the f step to the initialization step sequence, if it
           is not already present.)

       --index-tablespace=index_tablespace
           Create indexes in the specified tablespace, rather than the default
           tablespace.

       --partition-method=NAME
           Create a partitioned pgbench_accounts table with the NAME method.
           Expected values are range or hash. This option requires that
           --partitions is set to non-zero. If unspecified, the default is
           range.

       --partitions=NUM
           Create a partitioned pgbench_accounts table with NUM partitions of
           nearly equal size for the scaled number of accounts. Default is 0,
           meaning no partitioning.

       --tablespace=tablespace
           Create tables in the specified tablespace, rather than the default
           tablespace.

       --unlogged-tables
           Create all tables as unlogged tables, rather than permanent tables.

   Benchmarking Options
       pgbench accepts the following command-line benchmarking arguments:

       -b scriptname[@weight]
       --builtin=scriptname[@weight]
           Add the specified built-in script to the list of scripts to be
           executed. Available built-in scripts are: tpcb-like, simple-update
           and select-only. Unambiguous prefixes of built-in names are
           accepted. With the special name list, show the list of built-in
           scripts and exit immediately.

           Optionally, write an integer weight after @ to adjust the
           probability of selecting this script versus other ones. The default
           weight is 1. See below for details.
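
       Script selection by weight works like any weighted random draw: each
       script is chosen with probability weight / sum-of-weights. A hedged
       sketch of that idea (illustrative names, not pgbench internals):

```python
import random

# Weighted script selection: each (name, weight) entry is picked with
# probability weight / total_weight, as with -b tpcb-like@1 -b select-only@3.
def choose_script(scripts: list[tuple[str, int]], rng: random.Random) -> str:
    total = sum(w for _, w in scripts)
    r = rng.uniform(0, total)
    for name, w in scripts:
        r -= w
        if r < 0:
            return name
    return scripts[-1][0]   # guard against floating-point edge cases

rng = random.Random(1)
draws = [choose_script([("tpcb-like", 1), ("select-only", 3)], rng)
         for _ in range(10_000)]
# select-only should win about 75% of the draws
assert 0.72 < draws.count("select-only") / len(draws) < 0.78
```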

       -c clients
       --client=clients
           Number of clients simulated, that is, number of concurrent database
           sessions. Default is 1.

       -C
       --connect
           Establish a new connection for each transaction, rather than doing
           it just once per client session. This is useful to measure the
           connection overhead.

       -d
       --debug
           Print debugging output.

       -D varname=value
       --define=varname=value
           Define a variable for use by a custom script (see below). Multiple
           -D options are allowed.

       -f filename[@weight]
       --file=filename[@weight]
           Add a transaction script read from filename to the list of scripts
           to be executed.

           Optionally, write an integer weight after @ to adjust the
           probability of selecting this script versus other ones. The default
           weight is 1. (To use a script file name that includes an @
           character, append a weight so that there is no ambiguity, for
           example filen@me@1.) See below for details.

       -j threads
       --jobs=threads
           Number of worker threads within pgbench. Using more than one thread
           can be helpful on multi-CPU machines. Clients are distributed as
           evenly as possible among available threads. Default is 1.
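
       “As evenly as possible” means the first clients-mod-threads threads
       each take one extra client. A sketch of that distribution:

```python
# Even distribution of clients over worker threads: each thread gets
# clients // threads, and the first (clients % threads) threads get
# one more. (Illustrative sketch of the behavior described above.)
def clients_per_thread(clients: int, threads: int) -> list[int]:
    base, extra = divmod(clients, threads)
    return [base + 1 if i < extra else base for i in range(threads)]

assert clients_per_thread(10, 4) == [3, 3, 2, 2]   # -c 10 -j 4
assert sum(clients_per_thread(17, 5)) == 17
```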
253
254 -l
255 --log
256 Write information about each transaction to a log file. See below
257 for details.
258
259 -L limit
260 --latency-limit=limit
261 Transactions that last more than limit milliseconds are counted and
262 reported separately, as late.
263
264 When throttling is used (--rate=...), transactions that lag behind
265 schedule by more than limit ms, and thus have no hope of meeting
266 the latency limit, are not sent to the server at all. They are
267 counted and reported separately as skipped.
268
269 -M querymode
270 --protocol=querymode
271 Protocol to use for submitting queries to the server:
272
273 • simple: use simple query protocol.
274
275 • extended: use extended query protocol.
276
277 • prepared: use extended query protocol with prepared statements.
278
279 In the prepared mode, pgbench reuses the parse analysis result
280 starting from the second query iteration, so pgbench runs faster
281 than in other modes.
282
283 The default is simple query protocol. (See Chapter 53 for more
284 information.)
285
286 -n
287 --no-vacuum
288 Perform no vacuuming before running the test. This option is
289 necessary if you are running a custom test scenario that does not
290 include the standard tables pgbench_accounts, pgbench_branches,
291 pgbench_history, and pgbench_tellers.
292
293 -N
294 --skip-some-updates
295 Run built-in simple-update script. Shorthand for -b simple-update.
296
297 -P sec
298 --progress=sec
299 Show progress report every sec seconds. The report includes the
300 time since the beginning of the run, the TPS since the last report,
301 and the transaction latency average and standard deviation since
302 the last report. Under throttling (-R), the latency is computed
303 with respect to the transaction scheduled start time, not the
304 actual transaction beginning time, thus it also includes the
305 average schedule lag time.
306
307 -r
308 --report-latencies
309 Report the average per-statement latency (execution time from the
310 perspective of the client) of each command after the benchmark
311 finishes. See below for details.
312
313 -R rate
314 --rate=rate
315 Execute transactions targeting the specified rate instead of
316 running as fast as possible (the default). The rate is given in
317 transactions per second. If the targeted rate is above the maximum
318 possible rate, the rate limit won't impact the results.
319
320 The rate is targeted by starting transactions along a
321 Poisson-distributed schedule time line. The expected start time
322 schedule moves forward based on when the client first started, not
323 when the previous transaction ended. That approach means that when
324 transactions go past their original scheduled end time, it is
325 possible for later ones to catch up again.
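
       A Poisson-distributed schedule means exponentially distributed gaps
       between scheduled start times, with mean 1/rate, accumulated from the
       client's start time. A sketch under that assumption:

```python
import math
import random

# Build a Poisson-process schedule for a target rate: successive
# scheduled start times separated by exponential gaps of mean 1/rate,
# anchored to time zero (the client's start), not to transaction ends.
def poisson_schedule(rate_tps: float, n: int, rng: random.Random) -> list[float]:
    t, times = 0.0, []
    for _ in range(n):
        t += -math.log(1.0 - rng.random()) / rate_tps  # exponential gap
        times.append(t)
    return times

times = poisson_schedule(100.0, 20_000, random.Random(42))
mean_gap = times[-1] / len(times)
assert 0.0095 < mean_gap < 0.0105    # mean gap ~ 1/rate = 10 ms
```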
326
327 When throttling is active, the transaction latency reported at the
328 end of the run is calculated from the scheduled start times, so it
329 includes the time each transaction had to wait for the previous
330 transaction to finish. The wait time is called the schedule lag
331 time, and its average and maximum are also reported separately. The
332 transaction latency with respect to the actual transaction start
333 time, i.e., the time spent executing the transaction in the
334 database, can be computed by subtracting the schedule lag time from
335 the reported latency.
336
337 If --latency-limit is used together with --rate, a transaction can
338 lag behind so much that it is already over the latency limit when
339 the previous transaction ends, because the latency is calculated
340 from the scheduled start time. Such transactions are not sent to
341 the server, but are skipped altogether and counted separately.
342
343 A high schedule lag time is an indication that the system cannot
344 process transactions at the specified rate, with the chosen number
345 of clients and threads. When the average transaction execution time
346 is longer than the scheduled interval between each transaction,
347 each successive transaction will fall further behind, and the
348 schedule lag time will keep increasing the longer the test run is.
349 When that happens, you will have to reduce the specified
350 transaction rate.
351
352 -s scale_factor
353 --scale=scale_factor
354 Report the specified scale factor in pgbench's output. With the
355 built-in tests, this is not necessary; the correct scale factor
356 will be detected by counting the number of rows in the
357 pgbench_branches table. However, when testing only custom
358 benchmarks (-f option), the scale factor will be reported as 1
359 unless this option is used.
360
361 -S
362 --select-only
363 Run built-in select-only script. Shorthand for -b select-only.
364
365 -t transactions
366 --transactions=transactions
367 Number of transactions each client runs. Default is 10.
368
369 -T seconds
370 --time=seconds
371 Run the test for this many seconds, rather than a fixed number of
372 transactions per client. -t and -T are mutually exclusive.
373
374 -v
375 --vacuum-all
376 Vacuum all four standard tables before running the test. With
377 neither -n nor -v, pgbench will vacuum the pgbench_tellers and
378 pgbench_branches tables, and will truncate pgbench_history.
379
380 --aggregate-interval=seconds
381 Length of aggregation interval (in seconds). May be used only with
382 -l option. With this option, the log contains per-interval summary
383 data, as described below.
384
385 --log-prefix=prefix
386 Set the filename prefix for the log files created by --log. The
387 default is pgbench_log.
388
389 --progress-timestamp
390 When showing progress (option -P), use a timestamp (Unix epoch)
391 instead of the number of seconds since the beginning of the run.
392 The unit is in seconds, with millisecond precision after the dot.
393 This helps compare logs generated by various tools.
394
395 --random-seed=seed
396 Set random generator seed. Seeds the system random number
397 generator, which then produces a sequence of initial generator
398 states, one for each thread. Values for seed may be: time (the
399 default, the seed is based on the current time), rand (use a strong
400 random source, failing if none is available), or an unsigned
401 decimal integer value. The random generator is invoked explicitly
402 from a pgbench script (random... functions) or implicitly (for
403 instance option --rate uses it to schedule transactions). When
404 explicitly set, the value used for seeding is shown on the
405 terminal. Any value allowed for seed may also be provided through
406 the environment variable PGBENCH_RANDOM_SEED. To ensure that the
407 provided seed impacts all possible uses, put this option first or
408 use the environment variable.
409
410 Setting the seed explicitly allows to reproduce a pgbench run
411 exactly, as far as random numbers are concerned. As the random
412 state is managed per thread, this means the exact same pgbench run
413 for an identical invocation if there is one client per thread and
414 there are no external or data dependencies. From a statistical
415 viewpoint reproducing runs exactly is a bad idea because it can
416 hide the performance variability or improve performance unduly,
417 e.g., by hitting the same pages as a previous run. However, it may
418 also be of great help for debugging, for instance re-running a
419 tricky case which leads to an error. Use wisely.
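
       The reproducibility property is the usual one for seeded pseudorandom
       generators: the same seed always replays the same sequence.
       Illustrated with Python's generator (standing in for pgbench's
       per-thread random state):

```python
import random

# Two generators seeded identically produce identical streams, which
# is what --random-seed=5432 gives each pgbench thread on a re-run.
a = random.Random(5432)
b = random.Random(5432)

run1 = [a.randint(1, 100_000) for _ in range(5)]
run2 = [b.randint(1, 100_000) for _ in range(5)]
assert run1 == run2    # identical seed, identical "runs"
```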
420
421 --sampling-rate=rate
422 Sampling rate, used when writing data into the log, to reduce the
423 amount of log generated. If this option is given, only the
424 specified fraction of transactions are logged. 1.0 means all
425 transactions will be logged, 0.05 means only 5% of the transactions
426 will be logged.
427
428 Remember to take the sampling rate into account when processing the
429 log file. For example, when computing TPS values, you need to
430 multiply the numbers accordingly (e.g., with 0.01 sample rate,
431 you'll only get 1/100 of the actual TPS).
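
       Concretely, when post-processing a sampled log, divide the logged
       transaction count by the sampling rate before computing TPS. A sketch:

```python
# Correcting TPS computed from a sampled transaction log: scale the
# logged count back up by the sampling rate. (Illustrative helper,
# not part of pgbench.)
def estimated_tps(logged_tx: int, sampling_rate: float, duration_s: float) -> float:
    return logged_tx / sampling_rate / duration_s

# With --sampling-rate=0.01, 900 logged transactions over a 60 s run
# correspond to roughly 90,000 real transactions, i.e. 1500 TPS.
assert estimated_tps(900, 0.01, 60.0) == 1500.0
```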

       --show-script=scriptname
           Show the actual code of built-in script scriptname on stderr, and
           exit immediately.

   Common Options
       pgbench also accepts the following common command-line arguments for
       connection parameters:

       -h hostname
       --host=hostname
           The database server's host name

       -p port
       --port=port
           The database server's port number

       -U login
       --username=login
           The user name to connect as

       -V
       --version
           Print the pgbench version and exit.

       -?
       --help
           Show help about pgbench command-line arguments, and exit.

EXIT STATUS
       A successful run will exit with status 0. Exit status 1 indicates
       static problems such as invalid command-line options. Errors during the
       run such as database errors or problems in the script will result in
       exit status 2. In the latter case, pgbench will print partial results.

ENVIRONMENT
       PGDATABASE
       PGHOST
       PGPORT
       PGUSER
           Default connection parameters.

       This utility, like most other PostgreSQL utilities, uses the
       environment variables supported by libpq (see Section 34.15).

       The environment variable PG_COLOR specifies whether to use color in
       diagnostic messages. Possible values are always, auto and never.

NOTES
   What Is the “Transaction” Actually Performed in pgbench?
       pgbench executes test scripts chosen randomly from a specified list.
       The scripts may include built-in scripts specified with -b and
       user-provided scripts specified with -f. Each script may be given a
       relative weight specified after an @ so as to change its selection
       probability. The default weight is 1. Scripts with a weight of 0 are
       ignored.

       The default built-in transaction script (also invoked with -b
       tpcb-like) issues seven commands per transaction over randomly chosen
       aid, tid, bid and delta. The scenario is inspired by the TPC-B
       benchmark, but is not actually TPC-B, hence the name.

        1. BEGIN;

        2. UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid
           = :aid;

        3. SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

        4. UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid =
           :tid;

        5. UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid
           = :bid;

        6. INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES
           (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);

        7. END;

       If you select the simple-update built-in (also -N), steps 4 and 5
       aren't included in the transaction. This will avoid update contention
       on these tables, but it makes the test case even less like TPC-B.

       If you select the select-only built-in (also -S), only the SELECT is
       issued.

   Custom Scripts
       pgbench has support for running custom benchmark scenarios by replacing
       the default transaction script (described above) with a transaction
       script read from a file (-f option). In this case a “transaction”
       counts as one execution of a script file.

       A script file contains one or more SQL commands terminated by
       semicolons. Empty lines and lines beginning with -- are ignored. Script
       files can also contain “meta commands”, which are interpreted by
       pgbench itself, as described below.

       Note
           Before PostgreSQL 9.6, SQL commands in script files were terminated
           by newlines, and so they could not be continued across lines. Now a
           semicolon is required to separate consecutive SQL commands (though
           an SQL command does not need one if it is followed by a meta
           command). If you need to create a script file that works with both
           old and new versions of pgbench, be sure to write each SQL command
           on a single line ending with a semicolon.

       There is a simple variable-substitution facility for script files.
       Variable names must consist of letters (including non-Latin letters),
       digits, and underscores, with the first character not being a digit.
       Variables can be set by the command-line -D option, explained above, or
       by the meta commands explained below. In addition to any variables
       preset by -D command-line options, there are a few variables that are
       preset automatically, listed in Table 282. A value specified for these
       variables using -D takes precedence over the automatic presets. Once
       set, a variable's value can be inserted into an SQL command by writing
       :variablename. When running more than one client session, each session
       has its own set of variables. pgbench supports up to 255 variable uses
       in one statement.
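
       The naming rule can be checked mechanically; a sketch using a
       Unicode-aware regular expression (an illustration of the rule, not
       pgbench's own parser):

```python
import re

# Valid pgbench variable names: letters (including non-Latin letters),
# digits and underscores, with a non-digit first character.
# [^\W\d] matches any "word" character that is not a digit.
VARNAME = re.compile(r"^[^\W\d]\w*$", re.UNICODE)

assert VARNAME.match("aid")
assert VARNAME.match("_tmp2")
assert VARNAME.match("café")          # non-Latin letters are fine
assert not VARNAME.match("1st")       # may not start with a digit
assert not VARNAME.match("a-b")       # hyphen is not allowed
```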

       Table 282. pgbench Automatic Variables
       ┌─────────────┬────────────────────────────┐
       │Variable     │ Description                │
       ├─────────────┼────────────────────────────┤
       │client_id    │ unique number identifying  │
       │             │ the client session (starts │
       │             │ from zero)                 │
       ├─────────────┼────────────────────────────┤
       │default_seed │ seed used in hash and      │
       │             │ pseudorandom permutation   │
       │             │ functions by default       │
       ├─────────────┼────────────────────────────┤
       │random_seed  │ random generator seed      │
       │             │ (unless overwritten with   │
       │             │ -D)                        │
       ├─────────────┼────────────────────────────┤
       │scale        │ current scale factor       │
       └─────────────┴────────────────────────────┘

       Script file meta commands begin with a backslash (\) and normally
       extend to the end of the line, although they can be continued to
       additional lines by writing backslash-return. Arguments to a meta
       command are separated by white space. These meta commands are
       supported:

       \gset [prefix]
       \aset [prefix]
           These commands may be used to end SQL queries, taking the place of
           the terminating semicolon (;).

           When the \gset command is used, the preceding SQL query is expected
           to return one row, the columns of which are stored into variables
           named after the column names, and prefixed with prefix if provided.

           When the \aset command is used, all combined SQL queries (separated
           by \;) have their columns stored into variables named after the
           column names, and prefixed with prefix if provided. If a query
           returns no row, no assignment is made and the variable can be
           tested for existence to detect this. If a query returns more than
           one row, the last value is kept.

           \gset and \aset cannot be used in pipeline mode, since the query
           results are not yet available by the time the commands would need
           them.

           The following example puts the final account balance from the first
           query into variable abalance, and fills variables p_two and p_three
           with integers from the third query. The result of the second query
           is discarded. The results of the two last combined queries are
           stored in variables four and five.

               UPDATE pgbench_accounts
                 SET abalance = abalance + :delta
                 WHERE aid = :aid
                 RETURNING abalance \gset
               -- compound of two queries
               SELECT 1 \;
               SELECT 2 AS two, 3 AS three \gset p_
               SELECT 4 AS four \; SELECT 5 AS five \aset

       \if expression
       \elif expression
       \else
       \endif
           This group of commands implements nestable conditional blocks,
           similarly to psql's \if expression. Conditional expressions are
           identical to those with \set, with non-zero values interpreted as
           true.

       \set varname expression
           Sets variable varname to a value calculated from expression. The
           expression may contain the NULL constant, Boolean constants TRUE
           and FALSE, integer constants such as 5432, double constants such as
           3.14159, references to variables :variablename, operators with
           their usual SQL precedence and associativity, function calls, SQL
           CASE generic conditional expressions and parentheses.

           Functions and most operators return NULL on NULL input.

           For conditional purposes, non-zero numerical values are TRUE, and
           zero numerical values and NULL are FALSE.

           Integer and double constants that are too large or too small, as
           well as the integer arithmetic operators (+, -, * and /), raise
           errors on overflow.

           When no final ELSE clause is provided to a CASE, the default value
           is NULL.

           Examples:

               \set ntellers 10 * :scale
               \set aid (1021 * random(1, 100000 * :scale)) % \
                        (100000 * :scale) + 1
               \set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END
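
       The second example above hops through the account range with a
       multiplier; its arithmetic can be mirrored in Python (taking
       random(lo, hi) to be a uniform integer in [lo, hi], as in pgbench):

```python
import random

# Python rendering of:
#   \set aid (1021 * random(1, 100000 * :scale)) % (100000 * :scale) + 1
def set_aid(scale: int, rng: random.Random) -> int:
    n = 100_000 * scale          # number of accounts at this scale
    return (1021 * rng.randint(1, n)) % n + 1

rng = random.Random(0)
samples = [set_aid(10, rng) for _ in range(1_000)]
# The result always stays inside the account id range [1, 100000*scale].
assert all(1 <= aid <= 1_000_000 for aid in samples)
```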

       \sleep number [ us | ms | s ]
           Causes script execution to sleep for the specified duration in
           microseconds (us), milliseconds (ms) or seconds (s). If the unit is
           omitted, then seconds are the default.  number can be either an
           integer constant or a :variablename reference to a variable having
           an integer value.

           Example:

               \sleep 10 ms

       \setshell varname command [ argument ... ]
           Sets variable varname to the result of the shell command command
           with the given argument(s). The command must return an integer
           value through its standard output.

           command and each argument can be either a text constant or a
           :variablename reference to a variable. If you want to use an
           argument starting with a colon, write an additional colon at the
           beginning of argument.

           Example:

               \setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon

       \shell command [ argument ... ]
           Same as \setshell, but the result of the command is discarded.

           Example:

               \shell command literal_argument :variable ::literal_starting_with_colon

       \startpipeline
       \endpipeline
           These commands delimit the start and end of a pipeline of SQL
           statements. In pipeline mode, statements are sent to the server
           without waiting for the results of previous statements. See
           Section 34.5 for more details. Pipeline mode requires the use of
           the extended query protocol.

   Built-in Operators
       The arithmetic, bitwise, comparison and logical operators listed in
       Table 283 are built into pgbench and may be used in expressions
       appearing in \set. The operators are listed in increasing precedence
       order. Except as noted, operators taking two numeric inputs will
       produce a double value if either input is double, otherwise they
       produce an integer result.

       Table 283. pgbench Operators

       Operator                                       Description                 Example(s)
       ────────────────────────────────────────────────────────────────────────────────────────
       boolean OR boolean → boolean                   Logical OR                  5 or 0 → TRUE
       boolean AND boolean → boolean                  Logical AND                 3 and 0 → FALSE
       NOT boolean → boolean                          Logical NOT                 not false → TRUE
       boolean IS [NOT] (NULL|TRUE|FALSE) → boolean   Boolean value tests         1 is null → FALSE
       value ISNULL|NOTNULL → boolean                 Nullness tests              1 notnull → TRUE
       number = number → boolean                      Equal                       5 = 4 → FALSE
       number <> number → boolean                     Not equal                   5 <> 4 → TRUE
       number != number → boolean                     Not equal                   5 != 5 → FALSE
       number < number → boolean                      Less than                   5 < 4 → FALSE
       number <= number → boolean                     Less than or equal to       5 <= 4 → FALSE
       number > number → boolean                      Greater than                5 > 4 → TRUE
       number >= number → boolean                     Greater than or equal to    5 >= 4 → TRUE
       integer | integer → integer                    Bitwise OR                  1 | 2 → 3
       integer # integer → integer                    Bitwise XOR                 1 # 3 → 2
       integer & integer → integer                    Bitwise AND                 1 & 3 → 1
       ~ integer → integer                            Bitwise NOT                 ~ 1 → -2
       integer << integer → integer                   Bitwise shift left          1 << 2 → 4
       integer >> integer → integer                   Bitwise shift right         8 >> 2 → 2
       number + number → number                       Addition                    5 + 4 → 9
       number - number → number                       Subtraction                 3 - 2.0 → 1.0
       number * number → number                       Multiplication              5 * 4 → 20
       number / number → number                       Division (truncates the     5 / 3 → 1
                                                      result towards zero if
                                                      both inputs are integers)
       integer % integer → integer                    Modulo (remainder)          3 % 2 → 1
       - number → number                              Negation                    - 2.0 → -2.0
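
       Several of the example rows in Table 283 can be spot-checked in
       Python, whose integer bitwise semantics match (with # written as ^,
       and with truncating division spelled via int(), since Python's //
       floors rather than truncates for negative operands):

```python
# Spot-checks of the operator examples in Table 283. Python's ^ plays
# the role of pgbench's # (bitwise XOR).
assert (5 or 0) and not (3 and 0)   # logical OR / AND on "truthy" ints
assert 1 | 2 == 3
assert 1 ^ 3 == 2                   # pgbench: 1 # 3 → 2
assert 1 & 3 == 1
assert ~1 == -2
assert 1 << 2 == 4 and 8 >> 2 == 2
assert int(5 / 3) == 1              # integer division truncates toward zero
assert 3 % 2 == 1
assert 3 - 2.0 == 1.0               # double result when either input is double
```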
875
876 Built-In Functions
877 The functions listed in Table 284 are built into pgbench and may be
878 used in expressions appearing in \set.
879
880 Table 284. pgbench Functions
   ┌────────────────────────────────────────┐
   │ Function                               │
   │                                        │
   │ Description                            │
   │                                        │
   │ Example(s)                             │
   ├────────────────────────────────────────┤
   │                                        │
   │ abs ( number ) → same type as input    │
   │                                        │
   │ Absolute value                         │
   │                                        │
   │ abs(-17) → 17                          │
   ├────────────────────────────────────────┤
   │                                        │
   │ debug ( number ) → same type as input  │
   │                                        │
   │ Prints the argument to stderr, and     │
   │ returns the argument.                  │
   │                                        │
   │ debug(5432.1) → 5432.1                 │
   ├────────────────────────────────────────┤
   │                                        │
   │ double ( number ) → double             │
   │                                        │
   │ Casts to double.                       │
   │                                        │
   │ double(5432) → 5432.0                  │
   ├────────────────────────────────────────┤
   │                                        │
   │ exp ( number ) → double                │
   │                                        │
   │ Exponential (e raised to the given     │
   │ power)                                 │
   │                                        │
   │ exp(1.0) → 2.718281828459045           │
   ├────────────────────────────────────────┤
   │                                        │
   │ greatest ( number [, ... ] ) → double  │
   │ if any argument is double, else        │
   │ integer                                │
   │                                        │
   │ Selects the largest value among the    │
   │ arguments.                             │
   │                                        │
   │ greatest(5, 4, 3, 2) → 5               │
   ├────────────────────────────────────────┤
   │                                        │
   │ hash ( value [, seed ] ) → integer     │
   │                                        │
   │ This is an alias for hash_murmur2.     │
   │                                        │
   │ hash(10, 5432) →                       │
   │ -5817877081768721676                   │
   ├────────────────────────────────────────┤
   │                                        │
   │ hash_fnv1a ( value [, seed ] ) →       │
   │ integer                                │
   │                                        │
   │ Computes FNV-1a hash.                  │
   │                                        │
   │ hash_fnv1a(10, 5432) →                 │
   │ -7793829335365542153                   │
   ├────────────────────────────────────────┤
   │                                        │
   │ hash_murmur2 ( value [, seed ] ) →     │
   │ integer                                │
   │                                        │
   │ Computes MurmurHash2 hash.             │
   │                                        │
   │ hash_murmur2(10, 5432) →               │
   │ -5817877081768721676                   │
   ├────────────────────────────────────────┤
   │                                        │
   │ int ( number ) → integer               │
   │                                        │
   │ Casts to integer.                      │
   │                                        │
   │ int(5.4 + 3.8) → 9                     │
   ├────────────────────────────────────────┤
   │                                        │
   │ least ( number [, ... ] ) → double if  │
   │ any argument is double, else integer   │
   │                                        │
   │ Selects the smallest value among the   │
   │ arguments.                             │
   │                                        │
   │ least(5, 4, 3, 2.1) → 2.1              │
   ├────────────────────────────────────────┤
   │                                        │
   │ ln ( number ) → double                 │
   │                                        │
   │ Natural logarithm                      │
   │                                        │
   │ ln(2.718281828459045) → 1.0            │
   ├────────────────────────────────────────┤
   │                                        │
   │ mod ( integer, integer ) → integer     │
   │                                        │
   │ Modulo (remainder)                     │
   │                                        │
   │ mod(54, 32) → 22                       │
   ├────────────────────────────────────────┤
   │                                        │
   │ permute ( i, size [, seed ] ) →        │
   │ integer                                │
   │                                        │
   │ Permuted value of i, in the range      │
   │ [0, size). This is the new position    │
   │ of i (modulo size) in a pseudorandom   │
   │ permutation of the integers            │
   │ 0...size-1, parameterized by seed,     │
   │ see below.                             │
   │                                        │
   │ permute(0, 4) → an integer between     │
   │ 0 and 3                                │
   ├────────────────────────────────────────┤
   │                                        │
   │ pi () → double                         │
   │                                        │
   │ Approximate value of π                 │
   │                                        │
   │ pi() → 3.14159265358979323846          │
   ├────────────────────────────────────────┤
   │                                        │
   │ pow ( x, y ) → double                  │
   │                                        │
   │ power ( x, y ) → double                │
   │                                        │
   │ x raised to the power of y             │
   │                                        │
   │ pow(2.0, 10) → 1024.0                  │
   ├────────────────────────────────────────┤
   │                                        │
   │ random ( lb, ub ) → integer            │
   │                                        │
   │ Computes a uniformly-distributed       │
   │ random integer in [lb, ub].            │
   │                                        │
   │ random(1, 10) → an integer between     │
   │ 1 and 10                               │
   ├────────────────────────────────────────┤
   │                                        │
   │ random_exponential ( lb, ub,           │
   │ parameter ) → integer                  │
   │                                        │
   │ Computes an exponentially-distributed  │
   │ random integer in [lb, ub], see        │
   │ below.                                 │
   │                                        │
   │ random_exponential(1, 10, 3.0) → an    │
   │ integer between 1 and 10               │
   ├────────────────────────────────────────┤
   │                                        │
   │ random_gaussian ( lb, ub, parameter )  │
   │ → integer                              │
   │                                        │
   │ Computes a Gaussian-distributed        │
   │ random integer in [lb, ub], see        │
   │ below.                                 │
   │                                        │
   │ random_gaussian(1, 10, 2.5) → an       │
   │ integer between 1 and 10               │
   ├────────────────────────────────────────┤
   │                                        │
   │ random_zipfian ( lb, ub, parameter )   │
   │ → integer                              │
   │                                        │
   │ Computes a Zipfian-distributed random  │
   │ integer in [lb, ub], see below.        │
   │                                        │
   │ random_zipfian(1, 10, 1.5) → an        │
   │ integer between 1 and 10               │
   ├────────────────────────────────────────┤
   │                                        │
   │ sqrt ( number ) → double               │
   │                                        │
   │ Square root                            │
   │                                        │
   │ sqrt(2.0) → 1.414213562                │
   └────────────────────────────────────────┘
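       Several of the table's entries can be mimicked with ordinary Python
       builtins, which is useful for sanity-checking a script before running
       it. This is only an illustrative sketch, not pgbench's implementation;
       note in particular that int() truncates, which is why int(5.4 + 3.8)
       yields 9.

```python
# Cross-checking a few function-table examples with Python equivalents.
import math

print(int(5.4 + 3.8))     # 5.4 + 3.8 is 9.2, truncated to 9
print(float(5432))        # 5432.0, like double(5432)
print(abs(-17))           # 17
print(max(5, 4, 3, 2))    # 5, like greatest(5, 4, 3, 2)
print(min(5, 4, 3, 2.1))  # 2.1, like least(5, 4, 3, 2.1)
print(math.exp(1.0))      # 2.718281828459045, like exp(1.0)
```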
1087
       The random function generates values using a uniform distribution;
       that is, all values within the specified range are drawn with equal
       probability. The random_exponential, random_gaussian and
       random_zipfian functions require an additional double parameter which
       determines the precise shape of the distribution.
1093
1094 • For an exponential distribution, parameter controls the
1095 distribution by truncating a quickly-decreasing exponential
1096 distribution at parameter, and then projecting onto integers
1097 between the bounds. To be precise, with
1098
1099 f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))
1100
         then value i between min and max inclusive is drawn with
         probability f(i) - f(i + 1).
1103
1104 Intuitively, the larger the parameter, the more frequently values
1105 close to min are accessed, and the less frequently values close to
1106 max are accessed. The closer to 0 parameter is, the flatter (more
1107 uniform) the access distribution. A crude approximation of the
1108 distribution is that the most frequent 1% values in the range,
1109 close to min, are drawn parameter% of the time. The parameter value
1110 must be strictly positive.
1111
1112 • For a Gaussian distribution, the interval is mapped onto a standard
1113 normal distribution (the classical bell-shaped Gaussian curve)
1114 truncated at -parameter on the left and +parameter on the right.
1115 Values in the middle of the interval are more likely to be drawn.
1116 To be precise, if PHI(x) is the cumulative distribution function of
1117 the standard normal distribution, with mean mu defined as (max +
1118 min) / 2.0, with
1119
1120 f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
1121 (2.0 * PHI(parameter) - 1)
1122
1123 then value i between min and max inclusive is drawn with
1124 probability: f(i + 0.5) - f(i - 0.5). Intuitively, the larger the
1125 parameter, the more frequently values close to the middle of the
1126 interval are drawn, and the less frequently values close to the min
1127 and max bounds. About 67% of values are drawn from the middle 1.0 /
1128 parameter, that is a relative 0.5 / parameter around the mean, and
1129 95% in the middle 2.0 / parameter, that is a relative 1.0 /
1130 parameter around the mean; for instance, if parameter is 4.0, 67%
1131 of values are drawn from the middle quarter (1.0 / 4.0) of the
1132 interval (i.e., from 3.0 / 8.0 to 5.0 / 8.0) and 95% from the
1133 middle half (2.0 / 4.0) of the interval (second and third
1134 quartiles). The minimum allowed parameter value is 2.0.
1135
1136 • random_zipfian generates a bounded Zipfian distribution. parameter
1137 defines how skewed the distribution is. The larger the parameter,
1138 the more frequently values closer to the beginning of the interval
1139 are drawn. The distribution is such that, assuming the range starts
1140 from 1, the ratio of the probability of drawing k versus drawing
1141 k+1 is ((k+1)/k)**parameter. For example, random_zipfian(1, ...,
1142 2.5) produces the value 1 about (2/1)**2.5 = 5.66 times more
1143 frequently than 2, which itself is produced (3/2)**2.5 = 2.76 times
1144 more frequently than 3, and so on.
1145
1146 pgbench's implementation is based on "Non-Uniform Random Variate
1147 Generation", Luc Devroye, p. 550-551, Springer 1986. Due to
1148 limitations of that algorithm, the parameter value is restricted to
1149 the range [1.001, 1000].
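       The formulas above can be checked numerically. The sketch below is
       plain Python implementing the documented math, not pgbench's actual C
       code; the telescoping sums confirm that each probability function sums
       to exactly 1 over [min, max].

```python
# Numeric check of the three parameterized pgbench distributions.
import math

def phi(x):
    # Cumulative distribution function of the standard normal distribution
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exponential_pmf(i, lo, hi, parameter):
    # Probability that random_exponential(lo, hi, parameter) draws i
    def f(x):
        return (math.exp(-parameter * (x - lo) / (hi - lo + 1))
                / (1.0 - math.exp(-parameter)))
    return f(i) - f(i + 1)

def gaussian_pmf(i, lo, hi, parameter):
    # Probability that random_gaussian(lo, hi, parameter) draws i
    mu = (hi + lo) / 2.0
    def f(x):
        return (phi(2.0 * parameter * (x - mu) / (hi - lo + 1))
                / (2.0 * phi(parameter) - 1.0))
    return f(i + 0.5) - f(i - 0.5)

def zipfian_ratio(k, parameter):
    # Ratio of the probability of drawing k versus drawing k + 1
    return ((k + 1) / k) ** parameter

# Both probability functions telescope to 1 over [lo, hi]
assert abs(sum(exponential_pmf(i, 1, 10, 3.0) for i in range(1, 11)) - 1.0) < 1e-9
assert abs(sum(gaussian_pmf(i, 1, 10, 2.5) for i in range(1, 11)) - 1.0) < 1e-9
print(round(zipfian_ratio(1, 2.5), 2))  # → 5.66, as in the example above
```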
1150
1151 Note
1152 When designing a benchmark which selects rows non-uniformly, be
1153 aware that the rows chosen may be correlated with other data such
1154 as IDs from a sequence or the physical row ordering, which may skew
1155 performance measurements.
1156
1157 To avoid this, you may wish to use the permute function, or some
1158 other additional step with similar effect, to shuffle the selected
1159 rows and remove such correlations.
1160
       The hash functions hash, hash_murmur2 and hash_fnv1a accept an input
       value and an optional seed parameter. If the seed is not provided, the
       value of :default_seed is used, which is initialized randomly unless
       set by the command-line -D option.
1165
1166 permute accepts an input value, a size, and an optional seed parameter.
1167 It generates a pseudorandom permutation of integers in the range [0,
1168 size), and returns the index of the input value in the permuted values.
       The permutation chosen is parameterized by the seed, which defaults to
       :default_seed if not specified. Unlike the hash functions, permute
1171 ensures that there are no collisions or holes in the output values.
1172 Input values outside the interval are interpreted modulo the size. The
1173 function raises an error if the size is not positive. permute can be
1174 used to scatter the distribution of non-uniform random functions such
1175 as random_zipfian or random_exponential so that values drawn more often
1176 are not trivially correlated. For instance, the following pgbench
1177 script simulates a possible real world workload typical for social
1178 media and blogging platforms where a few accounts generate excessive
1179 load:
1180
1181 \set size 1000000
1182 \set r random_zipfian(1, :size, 1.07)
1183 \set k 1 + permute(:r, :size)
1184
       In some cases several distinct distributions are needed that don't
       correlate with each other, and this is when the optional seed
       parameter comes in handy:
1188
1189 \set k1 1 + permute(:r, :size, :default_seed + 123)
1190 \set k2 1 + permute(:r, :size, :default_seed + 321)
1191
1192 A similar behavior can also be approximated with hash:
1193
1194 \set size 1000000
1195 \set r random_zipfian(1, 100 * :size, 1.07)
1196 \set k 1 + abs(hash(:r)) % :size
1197
1198 However, since hash generates collisions, some values will not be
1199 reachable and others will be more frequent than expected from the
1200 original distribution.
1201
1202 As an example, the full definition of the built-in TPC-B-like
1203 transaction is:
1204
1205 \set aid random(1, 100000 * :scale)
1206 \set bid random(1, 1 * :scale)
1207 \set tid random(1, 10 * :scale)
1208 \set delta random(-5000, 5000)
1209 BEGIN;
1210 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
1211 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
1212 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
1213 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
1214 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
1215 END;
1216
1217 This script allows each iteration of the transaction to reference
1218 different, randomly-chosen rows. (This example also shows why it's
1219 important for each client session to have its own variables — otherwise
1220 they'd not be independently touching different rows.)
1221
1222 Per-Transaction Logging
1223 With the -l option (but without the --aggregate-interval option),
1224 pgbench writes information about each transaction to a log file. The
1225 log file will be named prefix.nnn, where prefix defaults to
1226 pgbench_log, and nnn is the PID of the pgbench process. The prefix can
1227 be changed by using the --log-prefix option. If the -j option is 2 or
1228 higher, so that there are multiple worker threads, each will have its
1229 own log file. The first worker will use the same name for its log file
1230 as in the standard single worker case. The additional log files for the
1231 other workers will be named prefix.nnn.mmm, where mmm is a sequential
1232 number for each worker starting with 1.
1233
1234 The format of the log is:
1235
1236 client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]
1237
1238 where client_id indicates which client session ran the transaction,
1239 transaction_no counts how many transactions have been run by that
1240 session, time is the total elapsed transaction time in microseconds,
1241 script_no identifies which script file was used (useful when multiple
1242 scripts were specified with -f or -b), and time_epoch/time_us are a
1243 Unix-epoch time stamp and an offset in microseconds (suitable for
1244 creating an ISO 8601 time stamp with fractional seconds) showing when
1245 the transaction completed. The schedule_lag field is the difference
1246 between the transaction's scheduled start time, and the time it
1247 actually started, in microseconds. It is only present when the --rate
1248 option is used. When both --rate and --latency-limit are used, the time
1249 for a skipped transaction will be reported as skipped.
1250
1251 Here is a snippet of a log file generated in a single-client run:
1252
1253 0 199 2241 0 1175850568 995598
1254 0 200 2465 0 1175850568 998079
1255 0 201 2513 0 1175850569 608
1256 0 202 2038 0 1175850569 2663
1257
1258 Another example with --rate=100 and --latency-limit=5 (note the
1259 additional schedule_lag column):
1260
1261 0 81 4621 0 1412881037 912698 3005
1262 0 82 6173 0 1412881037 914578 4304
1263 0 83 skipped 0 1412881037 914578 5217
1264 0 83 skipped 0 1412881037 914578 5099
1265 0 83 4722 0 1412881037 916203 3108
1266 0 84 4142 0 1412881037 918023 2333
1267 0 85 2465 0 1412881037 919759 740
1268
1269 In this example, transaction 82 was late, because its latency (6.173
1270 ms) was over the 5 ms limit. The next two transactions were skipped,
1271 because they were already late before they were even started.
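       Log lines in this format are straightforward to post-process. Here is
       a minimal Python sketch (an illustration, not a pgbench tool) that
       handles the optional schedule_lag column and "skipped" entries, using
       a few of the sample lines above:

```python
# Minimal parser for the per-transaction log format documented above.
sample = """\
0 81 4621 0 1412881037 912698 3005
0 82 6173 0 1412881037 914578 4304
0 83 skipped 0 1412881037 914578 5217
0 83 4722 0 1412881037 916203 3108
"""

skipped = 0
latencies = []
for line in sample.splitlines():
    # client_id transaction_no time script_no time_epoch time_us [schedule_lag]
    client_id, txn_no, time_us, script_no, epoch, us, *lag = line.split()
    if time_us == "skipped":
        skipped += 1
    else:
        latencies.append(int(time_us))

print(skipped)                          # → 1 skipped transaction
print(sum(latencies) / len(latencies))  # → 5172.0 microseconds on average
```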
1272
1273 When running a long test on hardware that can handle a lot of
1274 transactions, the log files can become very large. The --sampling-rate
1275 option can be used to log only a random sample of transactions.
1276
1277 Aggregated Logging
1278 With the --aggregate-interval option, a different format is used for
1279 the log files:
1280
1281 interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ]
1282
1283 where interval_start is the start of the interval (as a Unix epoch time
1284 stamp), num_transactions is the number of transactions within the
1285 interval, sum_latency is the sum of the transaction latencies within
1286 the interval, sum_latency_2 is the sum of squares of the transaction
1287 latencies within the interval, min_latency is the minimum latency
1288 within the interval, and max_latency is the maximum latency within the
1289 interval. The next fields, sum_lag, sum_lag_2, min_lag, and max_lag,
1290 are only present if the --rate option is used. They provide statistics
1291 about the time each transaction had to wait for the previous one to
1292 finish, i.e., the difference between each transaction's scheduled start
1293 time and the time it actually started. The very last field, skipped, is
1294 only present if the --latency-limit option is used, too. It counts the
1295 number of transactions skipped because they would have started too
1296 late. Each transaction is counted in the interval when it was
1297 committed.
1298
1299 Here is some example output:
1300
1301 1345828501 5601 1542744 483552416 61 2573
1302 1345828503 7884 1979812 565806736 60 1479
1303 1345828505 7208 1979422 567277552 59 1391
1304 1345828507 7685 1980268 569784714 60 1398
1305 1345828509 7073 1979779 573489941 236 1411
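       num_transactions, sum_latency and sum_latency_2 are enough to recover
       the mean and standard deviation of an interval's latencies. As a
       sketch, applied to the first interval of the example output:

```python
# Recover per-interval latency statistics from the aggregated log fields.
import math

n, s, s2 = 5601, 1542744, 483552416   # num_transactions, sum_latency, sum_latency_2
mean = s / n
stddev = math.sqrt(s2 / n - mean ** 2)
print(round(mean, 1))    # → 275.4 (average latency in microseconds)
print(round(stddev, 1))  # standard deviation in microseconds
```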
1306
       Notice that while the plain (unaggregated) log file shows which script
       was used for each transaction, the aggregated log does not. Therefore,
       if you need per-script data, you must aggregate it yourself.
1311
1312 Per-Statement Latencies
1313 With the -r option, pgbench collects the elapsed transaction time of
1314 each statement executed by every client. It then reports an average of
1315 those values, referred to as the latency for each statement, after the
1316 benchmark has finished.
1317
1318 For the default script, the output will look similar to this:
1319
1320 starting vacuum...end.
1321 transaction type: <builtin: TPC-B (sort of)>
1322 scaling factor: 1
1323 query mode: simple
1324 number of clients: 10
1325 number of threads: 1
1326 number of transactions per client: 1000
1327 number of transactions actually processed: 10000/10000
1328 latency average = 10.870 ms
1329 latency stddev = 7.341 ms
1330 initial connection time = 30.954 ms
1331 tps = 907.949122 (without initial connection time)
1332 statement latencies in milliseconds:
1333 0.001 \set aid random(1, 100000 * :scale)
1334 0.001 \set bid random(1, 1 * :scale)
1335 0.001 \set tid random(1, 10 * :scale)
1336 0.000 \set delta random(-5000, 5000)
1337 0.046 BEGIN;
1338 0.151 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
1339 0.107 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
1340 4.241 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
1341 5.245 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
1342 0.102 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
1343 0.974 END;
1344
1345 If multiple script files are specified, the averages are reported
1346 separately for each script file.
1347
1348 Note that collecting the additional timing information needed for
1349 per-statement latency computation adds some overhead. This will slow
1350 average execution speed and lower the computed TPS. The amount of
1351 slowdown varies significantly depending on platform and hardware.
1352 Comparing average TPS values with and without latency reporting enabled
1353 is a good way to measure if the timing overhead is significant.
1354
1355 Good Practices
1356 It is very easy to use pgbench to produce completely meaningless
1357 numbers. Here are some guidelines to help you get useful results.
1358
       In the first place, never believe any test that runs for only a few
       seconds. Use the -t or -T option to make the run last at least a few
       minutes, so as to average out noise. In some cases you may need hours
       to get reproducible numbers. It's a good idea to try the test run a
       few times, to find out whether your numbers are reproducible.
1364
1365 For the default TPC-B-like test scenario, the initialization scale
1366 factor (-s) should be at least as large as the largest number of
1367 clients you intend to test (-c); else you'll mostly be measuring update
1368 contention. There are only -s rows in the pgbench_branches table, and
1369 every transaction wants to update one of them, so -c values in excess
1370 of -s will undoubtedly result in lots of transactions blocked waiting
1371 for other transactions.
1372
1373 The default test scenario is also quite sensitive to how long it's been
1374 since the tables were initialized: accumulation of dead rows and dead
1375 space in the tables changes the results. To understand the results you
1376 must keep track of the total number of updates and when vacuuming
1377 happens. If autovacuum is enabled it can result in unpredictable
1378 changes in measured performance.
1379
1380 A limitation of pgbench is that it can itself become the bottleneck
1381 when trying to test a large number of client sessions. This can be
1382 alleviated by running pgbench on a different machine from the database
1383 server, although low network latency will be essential. It might even
1384 be useful to run several pgbench instances concurrently, on several
1385 client machines, against the same database server.
1386
1387 Security
1388 If untrusted users have access to a database that has not adopted a
1389 secure schema usage pattern, do not run pgbench in that database.
1390 pgbench uses unqualified names and does not manipulate the search path.
1391
1392
1393
PostgreSQL 14.3                       2022                          PGBENCH(1)