PG_TEST_TIMING(1)        PostgreSQL 10.7 Documentation       PG_TEST_TIMING(1)

NAME

       pg_test_timing - measure timing overhead

SYNOPSIS

       pg_test_timing [option...]

DESCRIPTION

       pg_test_timing is a tool to measure the timing overhead on your system
       and confirm that the system time never moves backwards. Systems that
       are slow to collect timing data can give less accurate EXPLAIN ANALYZE
       results.

OPTIONS

       pg_test_timing accepts the following command-line options:

       -d duration
       --duration=duration
           Specifies the test duration, in seconds. Longer durations give
           slightly better accuracy, and are more likely to discover problems
           with the system clock moving backwards. The default test duration
           is 3 seconds. (An example invocation appears at the end of this
           section.)

       -V
       --version
           Print the pg_test_timing version and exit.

       -?
       --help
           Show help about pg_test_timing command line arguments, and exit.

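       For example, a longer run is useful when hunting for intermittent
       clock problems; the 60-second duration here is only an illustration:

           pg_test_timing --duration=60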

USAGE

   Interpreting results
       Good results will show most (>90%) individual timing calls take less
       than one microsecond. Average per loop overhead will be even lower,
       below 100 nanoseconds. This example from an Intel i7-860 system using a
       TSC clock source shows excellent performance:

           Testing timing overhead for 3 seconds.
           Per loop time including overhead: 35.96 ns
           Histogram of timing durations:
             < us   % of total      count
                1     96.40465   80435604
                2      3.59518    2999652
                4      0.00015        126
                8      0.00002         13
               16      0.00000          2

       Note that different units are used for the per loop time than the
       histogram. The loop can have resolution within a few nanoseconds (ns),
       while the individual timing calls can only resolve down to one
       microsecond (us).

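       As a rough cross-check, the reported per-loop time and the histogram
       counts agree with each other:

           3 s / 35.96 ns per loop = roughly 83 million loops

       which matches the total of the count column above.
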
   Measuring executor timing overhead
       When the query executor is running a statement using EXPLAIN ANALYZE,
       individual operations are timed as well as showing a summary. The
       overhead of your system can be checked by counting rows with the psql
       program:

           CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
           \timing
           SELECT COUNT(*) FROM t;
           EXPLAIN ANALYZE SELECT COUNT(*) FROM t;

       On the i7-860 system measured, the count query runs in 9.8 ms while the
       EXPLAIN ANALYZE version takes 16.6 ms, each processing just over
       100,000 rows. That 6.8 ms difference means the timing overhead per row
       is 68 ns, about twice what pg_test_timing estimated it would be. Even
       that relatively small amount of overhead is making the fully timed
       count statement take almost 70% longer. On more substantial queries,
       the timing overhead would be less problematic.

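       That per-row figure is just arithmetic on the two timings reported
       above:

           (16.6 ms - 9.8 ms) / 100,000 rows = 6.8 ms / 100,000 = 68 ns per row
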
   Changing time sources
       On some newer Linux systems, it's possible to change the clock source
       used to collect timing data at any time. A second example shows the
       slowdown possible from switching to the slower acpi_pm time source, on
       the same system used for the fast results above:

           # cat /sys/devices/system/clocksource/clocksource0/available_clocksource
           tsc hpet acpi_pm
           # echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
           # pg_test_timing
           Per loop time including overhead: 722.92 ns
           Histogram of timing durations:
             < us   % of total      count
                1     27.84870    1155682
                2     72.05956    2990371
                4      0.07810       3241
                8      0.01357        563
               16      0.00007          3

       In this configuration, the sample EXPLAIN ANALYZE above takes 115.9 ms.
       That's 1061 ns of timing overhead, again a small multiple of what's
       measured directly by this utility. That much timing overhead means the
       actual query itself is only taking a tiny fraction of the accounted-for
       time; most of it is being consumed by overhead instead. In this
       configuration, any EXPLAIN ANALYZE totals involving many timed
       operations would be inflated significantly by timing overhead.

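       The 1061 ns figure comes from the same arithmetic as before, using the
       timings reported above:

           (115.9 ms - 9.8 ms) / 100,000 rows = 106.1 ms / 100,000 = 1061 ns per row

       If the clock source was only changed for this test, it can be switched
       back the same way, assuming tsc is still listed as available:

           # echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
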
       FreeBSD also allows changing the time source on the fly, and it logs
       information about the timer selected during boot:

           # dmesg | grep "Timecounter"
           Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
           Timecounter "i8254" frequency 1193182 Hz quality 0
           Timecounters tick every 10.000 msec
           Timecounter "TSC" frequency 2531787134 Hz quality 800
           # sysctl kern.timecounter.hardware=TSC
           kern.timecounter.hardware: ACPI-fast -> TSC

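       On FreeBSD, that selection can usually be made persistent across
       reboots with an entry in /etc/sysctl.conf; this is only a sketch, so
       check your release's documentation:

           kern.timecounter.hardware=TSC
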
       Other systems may only allow setting the time source on boot. On older
       Linux systems the "clock" kernel setting is the only way to make this
       sort of change. And even on some more recent ones, the only option
       you'll see for a clock source is "jiffies". Jiffies are the older Linux
       software clock implementation, which can have good resolution when it's
       backed by fast enough timing hardware, as in this example:

           $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
           jiffies
           $ dmesg | grep time.c
           time.c: Using 3.579545 MHz WALL PM GTOD PIT/TSC timer.
           time.c: Detected 2400.153 MHz processor.
           $ pg_test_timing
           Testing timing overhead for 3 seconds.
           Per timing duration including loop overhead: 97.75 ns
           Histogram of timing durations:
             < us   % of total      count
                1     90.23734   27694571
                2      9.75277    2993204
                4      0.00981       3010
                8      0.00007         22
               16      0.00000          1
               32      0.00000          1

   Clock hardware and timing accuracy
       Collecting accurate timing information is normally done on computers
       using hardware clocks with various levels of accuracy. With some
       hardware the operating systems can pass the system clock time almost
       directly to programs. A system clock can also be derived from a chip
       that simply provides timing interrupts, periodic ticks at some known
       time interval. In either case, operating system kernels provide a clock
       source that hides these details. But the accuracy of that clock source
       and how quickly it can return results varies based on the underlying
       hardware.

       Inaccurate time keeping can result in system instability. Test any
       change to the clock source very carefully. Operating system defaults
       are sometimes made to favor reliability over best accuracy. And if you
       are using a virtual machine, look into the recommended time sources
       compatible with it. Virtual hardware faces additional difficulties when
       emulating timers, and there are often per-operating-system settings
       suggested by vendors.

       The Time Stamp Counter (TSC) clock source is the most accurate one
       available on current generation CPUs. It's the preferred way to track
       the system time when it's supported by the operating system and the TSC
       clock is reliable. There are several ways that TSC can fail to provide
       an accurate timing source, making it unreliable. Older systems can have
       a TSC clock that varies based on the CPU temperature, making it
       unusable for timing. Trying to use TSC on some older multicore CPUs can
       give a reported time that's inconsistent among multiple cores. This can
       result in the time going backwards, a problem this program checks for.
       And even the newest systems can fail to provide accurate TSC timing
       with very aggressive power saving configurations.

       Newer operating systems may check for the known TSC problems and switch
       to a slower, more stable clock source when they are seen. If your
       system supports TSC time but doesn't default to that, it may be
       disabled for a good reason. And some operating systems may not detect
       all the possible problems correctly, or will allow using TSC even in
       situations where it's known to be inaccurate.

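       On Linux, a quick way to see which source is actually in use, and
       whether the kernel logged any TSC-related warnings at boot, is the
       following; the exact message wording varies by kernel version:

           $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
           $ dmesg | grep -i tsc
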
       The High Precision Event Timer (HPET) is the preferred timer on systems
       where it's available and TSC is not accurate. The timer chip itself is
       programmable to allow up to 100 nanosecond resolution, but you may not
       see that much accuracy in your system clock.

       Advanced Configuration and Power Interface (ACPI) provides a Power
       Management (PM) Timer, which Linux refers to as the acpi_pm. The clock
       derived from acpi_pm will at best provide 300 nanosecond resolution.

       Timers used on older PC hardware include the 8254 Programmable Interval
       Timer (PIT), the real-time clock (RTC), the Advanced Programmable
       Interrupt Controller (APIC) timer, and the Cyclone timer. These timers
       aim for millisecond resolution.

SEE ALSO

       EXPLAIN(7)

PostgreSQL 10.7                      2019                    PG_TEST_TIMING(1)