PMAPI(3)                  Library Functions Manual                  PMAPI(3)

NAME
PMAPI - introduction to the Performance Metrics Application Programming
Interface

C SYNOPSIS
#include <pcp/pmapi.h>

... assorted routines ...

cc ... -lpcp

DESCRIPTION
Within the framework of the Performance Co-Pilot (PCP), client
applications are developed using the Performance Metrics Application
Programming Interface (PMAPI), which defines a procedural interface with
services suited to the development of applications with a particular
interest in performance metrics.

This description presents an overview of the PMAPI and the context in
which PMAPI applications are run.  The PMAPI is more fully described in
the Performance Co-Pilot Programmer's Guide and in the manual pages for
the individual PMAPI routines.

PERFORMANCE METRICS - NAMES AND IDENTIFIERS
For a description of the Performance Metrics Name Space (PMNS) and
associated terms and concepts, see PCPIntro(1).

Not all Performance Metric Identifiers (PMIDs) need be represented in the
PMNS of every application.  For example, an application that monitors
disk traffic will likely use a name space that references only the PMIDs
for I/O statistics.

Applications that use the PMAPI may have independent versions of a PMNS,
constructed from an initialization file when the application starts; see
pmLoadASCIINameSpace(3), pmLoadNameSpace(3), and pmns(5).

Internally (below the PMAPI) the implementation of the Performance
Metrics Collection System (PMCS) uses only the PMIDs; a PMNS provides an
external mapping from a hierarchic taxonomy of names to PMIDs that is
convenient in the context of a particular system or a particular use of
the PMAPI.  For the application programmer, the routines pmLookupName(3)
and pmNameID(3) translate from names in a PMNS to PMIDs, and vice versa.
The PMNS may be traversed using pmGetChildren(3).

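By way of illustration, the following sketch (not part of this manual
page) translates a metric name to its PMID and back again; the metric
name kernel.all.load and the host localhost are assumptions for the
example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        /* metric name and host are assumptions for this sketch */
        const char  *namelist[] = { "kernel.all.load" };
        pmID        pmidlist[1];
        char        *name;
        int         sts;

        /* a context must be established before using PMNS routines */
        if ((sts = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0) {
            fprintf(stderr, "pmNewContext: %s\n", pmErrStr(sts));
            exit(1);
        }

        /* name -> PMID */
        if ((sts = pmLookupName(1, namelist, pmidlist)) < 0) {
            fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));
            exit(1);
        }
        printf("%s -> %s\n", namelist[0], pmIDStr(pmidlist[0]));

        /* PMID -> name; pmNameID allocates, the caller must free */
        if ((sts = pmNameID(pmidlist[0], &name)) < 0) {
            fprintf(stderr, "pmNameID: %s\n", pmErrStr(sts));
            exit(1);
        }
        printf("%s -> %s\n", pmIDStr(pmidlist[0]), name);
        free(name);
        return 0;
    }
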
PMAPI CONTEXT
An application using the PMAPI may manipulate several concurrent
contexts, each associated with a source of performance metrics, e.g.
pmcd(1) on some host, or an archive log of performance metrics as created
by pmlogger(1).

Contexts are identified by a ``handle'', a small integer value that is
returned when the context is created; see pmNewContext(3) and
pmDupContext(3).  Some PMAPI functions require an explicit handle to
identify the correct context, but more commonly the PMAPI function is
executed in the ``current'' context.  The current context may be
discovered using pmWhichContext(3) and changed using pmUseContext(3).

If a PMAPI context has not been explicitly established (or the previous
current context has been closed using pmDestroyContext(3)) then the
current PMAPI context is undefined.

In addition to the source of the performance metrics, the context also
includes the instance profile and the collection time (both described
below), which control how much information is returned and when that
information was collected.

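As a sketch (illustrative only; the host name localhost and the archive
name myarchive are assumptions), an application might juggle a live
context and an archive context like this:

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        int live, play, sts;

        /* one context for live metrics from pmcd(1) on some host ... */
        if ((live = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0) {
            fprintf(stderr, "pmNewContext: %s\n", pmErrStr(live));
            return 1;
        }
        /* ... and another for an archive log */
        if ((play = pmNewContext(PM_CONTEXT_ARCHIVE, "myarchive")) < 0) {
            fprintf(stderr, "pmNewContext: %s\n", pmErrStr(play));
            return 1;
        }

        /* the archive context is now current; switch back to live */
        if ((sts = pmUseContext(live)) < 0) {
            fprintf(stderr, "pmUseContext: %s\n", pmErrStr(sts));
            return 1;
        }
        printf("current context handle: %d\n", pmWhichContext());

        pmDestroyContext(play);
        pmDestroyContext(live);
        return 0;
    }
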
INSTANCE DOMAINS
When performance metric values are returned across the PMAPI to a
requesting application, there may be more than one value for a particular
metric.  Multiple values, or instances, for a single metric are typically
the result of instrumentation being implemented for each instance of a
set of similar components or services in a system, e.g. independent
counts for each CPU, or each process, or each disk, or each system call
type, etc.  This multiplicity of values is not enumerated in the name
space; rather, when performance metrics are delivered across the PMAPI by
pmFetch(3), the format of the result accommodates values for one or more
instances, with an instance-value pair encoding the metric value for a
particular instance.

Instances are identified by an internal identifier assigned by the agent
responsible for instantiating the values for the associated performance
metric.  Each internal instance identifier has a corresponding external
instance name (an ASCII string).  The routines pmGetInDom(3),
pmLookupInDom(3) and pmNameInDom(3) may be used to enumerate all instance
identifiers, and to translate between internal and external instance
identifiers.

All of the instance identifiers for a particular performance metric are
collectively known as an instance domain.  Multiple performance metrics
may share the same instance domain.

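For example, this sketch enumerates the instance domain of a per-CPU
metric (the metric name kernel.percpu.cpu.user and the host localhost are
assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        /* metric name and host are assumptions for this sketch */
        const char  *metric[] = { "kernel.percpu.cpu.user" };
        pmID        pmid;
        pmDesc      desc;
        int         *instlist;
        char        **instname;
        int         sts, i;

        if ((sts = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0 ||
            (sts = pmLookupName(1, metric, &pmid)) < 0 ||
            (sts = pmLookupDesc(pmid, &desc)) < 0) {
            fprintf(stderr, "setup: %s\n", pmErrStr(sts));
            exit(1);
        }

        /* enumerate all internal/external instance identifier pairs */
        if ((sts = pmGetInDom(desc.indom, &instlist, &instname)) < 0) {
            fprintf(stderr, "pmGetInDom: %s\n", pmErrStr(sts));
            exit(1);
        }
        for (i = 0; i < sts; i++)
            printf("instance %d is \"%s\"\n", instlist[i], instname[i]);

        /* both arrays were allocated below the PMAPI; release with
           free(3) */
        free(instlist);
        free(instname);
        return 0;
    }
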
If only one instance is ever available for a particular performance
metric, the instance identifier in the result from pmFetch(3) assumes the
special value PM_IN_NULL and may be ignored by the application; only one
instance-value pair appears in the result for that metric.  Under these
circumstances, the associated instance domain (as returned via
pmLookupDesc(3)) is set to PM_INDOM_NULL to indicate that values for this
metric are singular.

The difficult issue of transient performance metrics (e.g. per-filesystem
information, hot-plug replaceable hardware modules, etc.) means that
repeated requests for the same PMID may return different numbers of
values, and/or some changes in the particular instance identifiers
returned.  This means applications need to be aware that metric
instantiation is guaranteed to be valid only at the time of collection.
Similar rules apply to the transient semantics of the associated metric
values.  In general, however, it is expected that the bulk of the
performance metrics will have instantiation semantics that are fixed over
the execution life-time of any PMAPI client.

THE TYPE OF METRIC VALUES
The PMAPI supports a wide range of format and type encodings for the
values of performance metrics, namely signed and unsigned integers,
floating point numbers, 32-bit and 64-bit encodings of all of the above,
ASCII strings (C-style, null byte terminated), and arbitrary aggregates
of binary data.

The type field in the pmDesc structure returned by pmLookupDesc(3)
identifies the format and type of the values for a particular performance
metric within a particular PMAPI context.

Note that the encoding of values for a particular performance metric may
be different for different PMAPI contexts, due to differences in the
underlying implementation for different contexts.  However, it is
expected that the vast majority of performance metrics will have
consistent value encoding across all versions of all implementations, and
hence across all PMAPI contexts.

The PMAPI supports routines to automate the handling of the various value
formats and types, particularly for the common case where conversion to a
canonical format is desired; see pmExtractValue(3) and pmPrintValue(3).

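For instance (a sketch only, assuming a pmResult as returned by
pmFetch(3) and the corresponding pmDesc from pmLookupDesc(3)), a numeric
value of any native type may be converted to a canonical double:

    #include <stdio.h>
    #include <pcp/pmapi.h>

    /*
     * Print the first value of the first metric in a fetch result as
     * a double, whatever its native (numeric) type.
     */
    int
    print_first_as_double(pmResult *rp, pmDesc *dp)
    {
        pmValueSet   *vsp = rp->vset[0];
        pmAtomValue  atom;
        int          sts;

        if (vsp->numval <= 0)   /* no values, or an error code */
            return vsp->numval;

        sts = pmExtractValue(vsp->valfmt, &vsp->vlist[0],
                             dp->type, &atom, PM_TYPE_DOUBLE);
        if (sts < 0)
            return sts;
        printf("%.2f\n", atom.d);

        /* alternatively, let the library format the raw value */
        pmPrintValue(stdout, vsp->valfmt, dp->type, &vsp->vlist[0], 8);
        putchar('\n');
        return 0;
    }
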
THE DIMENSIONALITY AND SCALE OF METRIC VALUES
Independent of how the value is encoded, the value for a performance
metric is assumed to be drawn from a set of values that can be described
in terms of their dimensionality and scale by a compact encoding, as
follows.  The dimensionality is defined by a power, or index, in each of
3 orthogonal dimensions, namely Space, Time and Count (or Events, which
are dimensionless).  For example, I/O throughput might be represented as
Space/Time, while the running total of system calls is Count, memory
allocation is Space and average service time is Time/Count.  In each
dimension there are a number of common scale values that may be used to
better encode ranges that might otherwise exhaust the precision of a
32-bit value.  This information is encoded in the pmUnits structure,
which is embedded in the pmDesc structure returned from pmLookupDesc(3).

The routine pmConvScale(3) is provided to convert values in conjunction
with the pmUnits structures that define the dimensionality and scale of
the values for a particular performance metric as returned from
pmFetch(3), and the desired dimensionality and scale of the value the
PMAPI client wishes to manipulate.

INSTANCE PROFILE
The set of instances for performance metrics returned from a pmFetch(3)
call may be filtered or restricted using an instance profile.  There is
one instance profile for each PMAPI context the application creates, and
each instance profile may include instances from one or more instance
domains.

The routines pmAddProfile(3) and pmDelProfile(3) may be used to
dynamically adjust the instance profile.

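For example, the following sketch restricts subsequent fetches to a
single CPU (the metric name, the external instance name cpu0 and the host
are assumptions):

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        const char  *metric[] = { "kernel.percpu.cpu.user" };
        pmID        pmid;
        pmDesc      desc;
        int         inst, sts;

        if ((sts = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0 ||
            (sts = pmLookupName(1, metric, &pmid)) < 0 ||
            (sts = pmLookupDesc(pmid, &desc)) < 0) {
            fprintf(stderr, "setup: %s\n", pmErrStr(sts));
            return 1;
        }

        /* external instance name "cpu0" is an assumption */
        if ((inst = pmLookupInDom(desc.indom, "cpu0")) < 0) {
            fprintf(stderr, "pmLookupInDom: %s\n", pmErrStr(inst));
            return 1;
        }

        /* exclude everything in the domain, then re-admit "cpu0";
           subsequent pmFetch(3) calls return values for that
           instance only */
        if ((sts = pmDelProfile(desc.indom, 0, NULL)) < 0 ||
            (sts = pmAddProfile(desc.indom, 1, &inst)) < 0) {
            fprintf(stderr, "profile: %s\n", pmErrStr(sts));
            return 1;
        }
        return 0;
    }
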
COLLECTION TIME
For each set of values for performance metrics returned via pmFetch(3)
there is an associated ``timestamp'' that serves to identify when the
performance metric values were collected; for metrics being delivered
from a real-time source (i.e. pmcd(1) on some host) this would typically
be shortly before they were exported across the PMAPI, and for metrics
being delivered from an archive log, this would be the time when the
metrics were written into the archive log.

There is an issue here of exactly when individual metrics may have been
collected, especially given their origin in potentially different
Performance Metric Domains, and variability in the metric updating
frequency at the lowest level of the Performance Metric Domain.  The PMCS
opts for the pragmatic approach, in which the PMAPI implementation
undertakes to return all of the metrics with values accurate as of the
timestamp, to the best of its ability.  The belief is that the inaccuracy
this introduces is small, and that the additional burden of accurate
individual timestamping for each returned metric value is neither
warranted nor practical (from an implementation viewpoint).

Of course, in the case of collection of metrics from multiple hosts the
PMAPI client must assume that the sanity of the timestamps is constrained
by the extent to which clock synchronization protocols are implemented
across the network.

A PMAPI application may call pmSetMode(3) to vary the requested
collection time, e.g. to rescan performance metric values from the recent
past, or to ``fast-forward'' through an archive log.

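A sketch of archive replay (the archive name myarchive is an assumption,
and the pmSetMode(3) call shown uses the classical signature taking a
struct timeval and a delta in milliseconds; consult pmSetMode(3) for the
exact variant on your installation):

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        pmLogLabel  label;
        int         sts;

        if ((sts = pmNewContext(PM_CONTEXT_ARCHIVE, "myarchive")) < 0) {
            fprintf(stderr, "pmNewContext: %s\n", pmErrStr(sts));
            return 1;
        }
        if ((sts = pmGetArchiveLabel(&label)) < 0) {
            fprintf(stderr, "pmGetArchiveLabel: %s\n", pmErrStr(sts));
            return 1;
        }

        /* replay forwards from the start of the archive, with values
           interpolated at 10 second (10000 msec) intervals; each
           subsequent pmFetch(3) advances the collection time */
        if ((sts = pmSetMode(PM_MODE_INTERP,
                             &label.ll_start, 10000)) < 0) {
            fprintf(stderr, "pmSetMode: %s\n", pmErrStr(sts));
            return 1;
        }
        return 0;
    }
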
GENERAL ISSUES OF PMAPI PROGRAMMING STYLE AND INTERACTION
Across the PMAPI, all arguments and results involving a ``list of
something'' are declared to be arrays with an associated argument or
function value to identify the number of elements in the list.  This has
been done to avoid both the varargs(3) approach and sentinel-terminated
lists.

Where the size of a result is known at the time of a call, it is the
caller's responsibility to allocate (and possibly free) the storage, and
the called function will assume the result argument is of an appropriate
size.  Where a result is of variable size and that size cannot be known
in advance (e.g. for pmGetChildren(3), pmGetInDom(3), pmNameInDom(3),
pmNameID(3), pmLookupText(3) and pmFetch(3)) the PMAPI implementation
uses a range of dynamic allocation schemes in the called routine, with
the caller responsible for subsequently releasing the storage when it is
no longer required.  In some cases this simply involves calls to free(3),
but in others (most notably for the result from pmFetch(3)), special
routines (e.g. pmFreeResult(3)) should be used to release the storage.

As a general rule, if the called routine returns an error status then no
allocation will have been done, and any pointer to a variable-sized
result is undefined.

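These conventions are illustrated by the following sketch (the metric
name hinv.ncpu and the host are assumptions): the pmID array is
caller-allocated, while the pmResult is allocated below the PMAPI and
must be released with pmFreeResult(3), not free(3):

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        const char  *metric[] = { "hinv.ncpu" };
        pmID        pmid;       /* caller-allocated result */
        pmResult    *rp;        /* allocated below the PMAPI */
        int         sts;

        if ((sts = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0 ||
            (sts = pmLookupName(1, metric, &pmid)) < 0) {
            fprintf(stderr, "setup: %s\n", pmErrStr(sts));
            return 1;
        }

        if ((sts = pmFetch(1, &pmid, &rp)) < 0) {
            fprintf(stderr, "pmFetch: %s\n", pmErrStr(sts));
            return 1;
        }
        if (rp->vset[0]->numval == 1)
            /* a singular metric: one value, instance id PM_IN_NULL */
            printf("ncpu: %d\n", rp->vset[0]->vlist[0].value.lval);

        /* release the fetch result with the special routine */
        pmFreeResult(rp);
        return 0;
    }
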
PMAPI ERROR HANDLING
Where error conditions may arise, the functions that comprise the PMAPI
conform to a single, simple error notification scheme, as follows:

+  the function returns an integer

+  values >= 0 indicate no error, and perhaps some positive status, e.g.
   the number of things really processed

+  values < 0 indicate an error, with a global table of error conditions
   and error messages

The PMAPI routine pmErrStr(3) translates error conditions into error
messages.  By convention, the small negative values are assumed to be
negated versions of the Unix error codes as defined in <errno.h>, and the
strings returned are as per strerror(3).  The larger negative error codes
are PMAPI error conditions.

One error common to all PMAPI routines that interact with pmcd(1) on some
host is PM_ERR_IPC, which indicates that the communication link to
pmcd(1) has been lost.

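Both flavors of error code pass through pmErrStr(3), as this minimal
sketch shows:

    #include <stdio.h>
    #include <errno.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        /* a small negative value: a negated Unix error code ... */
        printf("%d: %s\n", -EPERM, pmErrStr(-EPERM));
        /* ... and a larger negative value: a PMAPI error condition */
        printf("%d: %s\n", PM_ERR_PMID, pmErrStr(PM_ERR_PMID));
        return 0;
    }
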
MULTI-THREADED APPLICATIONS
The original design for PCP was based around single-threaded
applications, or more strictly applications in which only one thread was
ever expected to call the PCP libraries.  This restriction has been
relaxed for libpcp to allow the most common PMAPI routines to be safely
called from any thread in a multi-threaded application.

However, the following groups of functions and services in libpcp are
still restricted to being called from a single thread; this is enforced
by returning PM_ERR_THREAD when an attempt to call the routines in each
group from more than one thread is detected.

1. Any use of a PM_CONTEXT_LOCAL context, as the DSO PMDAs that are
   called directly from libpcp may not be thread-safe.

2. The interval timer services use global state with semantics that
   demand it is only used in the context of a single thread, namely
   __pmAFregister(3), __pmAFunregister(3), __pmAFblock(3),
   __pmAFunblock(3) and __pmAFisempty(3).

3. The following (undocumented) access control manipulation routines
   that are principally intended for single-threaded applications:
   __pmAccAddOp, __pmAccSaveHosts, __pmAccRestoreHosts,
   __pmAccFreeSavedHosts, __pmAccAddHost, __pmAccAddClient,
   __pmAccDelClient and __pmAccDumpHosts.

4. The following (undocumented) routines that identify pmlogger control
   ports and are principally intended for single-threaded applications:
   __pmLogFindPort and __pmLogFindLocalPorts.

PCP ENVIRONMENT
Most environment variables are described in PCPIntro(1).  In addition,
environment variables with the prefix PCP_ are used to parameterize the
file and directory names used by PCP.  On each installation, the file
/etc/pcp.conf contains the local values for these variables.  The
$PCP_CONF variable may be used to specify an alternative configuration
file, as described in pcp.conf(5).  Values for these variables may be
obtained programmatically using the pmGetConfig(3) function.

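For example (PCP_VAR_DIR is one of the variables typically set via
/etc/pcp.conf):

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(void)
    {
        /* look up a PCP configuration variable by name */
        printf("PCP_VAR_DIR=%s\n", pmGetConfig("PCP_VAR_DIR"));
        return 0;
    }
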
SEE ALSO
PCPIntro(1), PCPIntro(3), PMAPI(3), pmda(3), pmGetConfig(3), pcp.conf(5)
and pcp.env(5).



Performance Co-Pilot                  PCP                           PMAPI(3)