PMAPI(3)                   Library Functions Manual                   PMAPI(3)

NAME
       PMAPI - introduction to the Performance Metrics Application Programming
       Interface

C SYNOPSIS
       #include <pcp/pmapi.h>

        ... assorted routines ...

       cc ... -lpcp

DESCRIPTION
       Within the framework of the Performance Co-Pilot (PCP), client
       applications are developed using the Performance Metrics Application
       Programming Interface (PMAPI), which defines a procedural interface
       with services suited to the development of applications with a
       particular interest in performance metrics.

       This description presents an overview of the PMAPI and the context in
       which PMAPI applications are run.  The PMAPI is described more fully
       in the Performance Co-Pilot Programmer's Guide, and in the manual
       pages for the individual PMAPI routines.

PERFORMANCE METRICS - NAMES AND IDENTIFIERS
       For a description of the Performance Metrics Name Space (PMNS) and
       associated terms and concepts, see PCPIntro(1).

       Not all PMIDs need be represented in the PMNS of every application.
       For example, an application which monitors disk traffic will likely
       use a name space which references only the PMIDs for I/O statistics.

       Applications which use the PMAPI may have independent versions of a
       PMNS, constructed from an initialization file when the application
       starts; see pmLoadASCIINameSpace(3), pmLoadNameSpace(3), and PMNS(5).

       Internally (below the PMAPI) the implementation of the Performance
       Metrics Collection System (PMCS) uses only the PMIDs, and a PMNS
       provides an external mapping from a hierarchic taxonomy of names to
       PMIDs that is convenient in the context of a particular system or
       particular use of the PMAPI.  For the applications programmer, the
       routines pmLookupName(3) and pmNameID(3) translate between names in a
       PMNS and PMIDs, and vice versa.  The PMNS may be traversed using
       pmGetChildren(3) and pmTraversePMNS(3).  The pmFetchGroup(3) functions
       combine metric name lookup, fetch, and conversion operations.
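
       For example, the following sketch (assuming an established current
       context, and using the hypothetical metric name kernel.all.load)
       translates a metric name to a PMID and back again:

           const char *names[] = { "kernel.all.load" };
           pmID       pmids[1];
           char       *name;
           int        sts;

           if ((sts = pmLookupName(1, names, pmids)) < 0)
               fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));
           else if ((sts = pmNameID(pmids[0], &name)) >= 0) {
               printf("%s -> %s\n", names[0], name);
               free(name);     /* caller releases the returned name */
           }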

PMAPI CONTEXT
       An application using the PMAPI may manipulate several concurrent
       contexts, each associated with a source of performance metrics, e.g.
       pmcd(1) on some host, or a set of archives of performance metrics as
       created by pmlogger(1).

       Contexts are identified by a ``handle'', a small integer value that is
       returned when the context is created; see pmNewContext(3) and
       pmDupContext(3).  Some PMAPI functions require an explicit ``handle''
       to identify the correct context, but more commonly the PMAPI function
       is executed in the ``current'' context.  The current context may be
       discovered using pmWhichContext(3) and changed using pmUseContext(3).
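
       As a sketch, an application might open two contexts (the archive name
       here is hypothetical) and switch between them:

           int ctx1, ctx2;

           if ((ctx1 = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0)
               fprintf(stderr, "pmNewContext: %s\n", pmErrStr(ctx1));
           if ((ctx2 = pmNewContext(PM_CONTEXT_ARCHIVE, "myarchive")) < 0)
               fprintf(stderr, "pmNewContext: %s\n", pmErrStr(ctx2));

           /* the most recently created context is current; switch back */
           if (pmUseContext(ctx1) >= 0)
               printf("current context is now %d\n", pmWhichContext());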

       If a PMAPI context has not been explicitly established (or the
       previous current context has been closed using pmDestroyContext(3))
       then the current PMAPI context is undefined.

       In addition to the source of the performance metrics, the context also
       includes the instance profile and the collection time (both described
       below), which control how much information is returned and when that
       information was collected.

INSTANCE DOMAINS
       When performance metric values are returned across the PMAPI to a
       requesting application, there may be more than one value for a
       particular metric.  Multiple values, or instances, for a single metric
       are typically the result of instrumentation being implemented for each
       instance of a set of similar components or services in a system, e.g.
       independent counts for each CPU, or each process, or each disk, or
       each system call type, etc.  This multiplicity of values is not
       enumerated in the name space; rather, when performance metrics are
       delivered across the PMAPI by pmFetch(3), the format of the result
       accommodates values for one or more instances, with an instance-value
       pair encoding the metric value for a particular instance.
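
       A sketch of the traversal idiom for the result follows; it assumes a
       current context and a pmids array already built via pmLookupName(3):

           pmResult *rp;
           int      i, j, sts;

           if ((sts = pmFetch(1, pmids, &rp)) >= 0) {
               for (i = 0; i < rp->numpmid; i++) {
                   pmValueSet *vsp = rp->vset[i];
                   for (j = 0; j < vsp->numval; j++)
                       printf("instance %d present\n", vsp->vlist[j].inst);
               }
               pmFreeResult(rp);       /* see pmFreeResult(3) */
           }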

       The instances are identified by an internal identifier assigned by the
       agent responsible for instantiating the values for the associated
       performance metric.  Each instance identifier has a corresponding
       external instance identifier name (an ASCII string).  The routines
       pmGetInDom(3), pmLookupInDom(3) and pmNameInDom(3) may be used to
       enumerate all instance identifiers, and to translate between internal
       and external instance identifiers.
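
       For example (assuming indom holds an instance domain obtained from
       the pmDesc of some metric), a minimal enumeration sketch is:

           int  *instlist;
           char **namelist;
           int  i, n;

           if ((n = pmGetInDom(indom, &instlist, &namelist)) >= 0) {
               for (i = 0; i < n; i++)
                   printf("[%d] %s\n", instlist[i], namelist[i]);
               free(instlist);         /* caller frees both lists */
               free(namelist);
           }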

       All of the instance identifiers for a particular performance metric
       are collectively known as an instance domain.  Multiple performance
       metrics may share the same instance domain.

       If only one instance is ever available for a particular performance
       metric, the instance identifier in the result from pmFetch(3) assumes
       the special value PM_IN_NULL and may be ignored by the application,
       and only one instance-value pair appears in the result for that
       metric.  Under these circumstances, the associated instance domain (as
       returned via pmLookupDesc(3)) is set to PM_INDOM_NULL to indicate that
       values for this metric are singular.
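
       The corresponding test might be sketched as follows, where pmid is
       assumed to be a valid PMID in the current context:

           pmDesc desc;

           if (pmLookupDesc(pmid, &desc) >= 0 &&
               desc.indom == PM_INDOM_NULL) {
               /* singular metric: one value per fetch, and the
                  instance identifier will be PM_IN_NULL */
           }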

       Transient performance metrics (e.g. per-filesystem information,
       hot-plug replaceable hardware modules, etc.) pose a difficult issue:
       repeated requests for the same PMID may return different numbers of
       values, and/or changes in the particular instance identifiers
       returned.  Applications therefore need to be aware that metric
       instantiation is guaranteed to be valid only at the time of
       collection.  Similar rules apply to the transient semantics of the
       associated metric values.  In general however, it is expected that the
       bulk of the performance metrics will have instantiation semantics that
       are fixed over the execution life-time of any PMAPI client.

THE TYPE OF METRIC VALUES
       The PMAPI supports a wide range of format and type encodings for the
       values of performance metrics, namely signed and unsigned integers,
       floating point numbers, 32-bit and 64-bit encodings of all of the
       above, ASCII strings (C-style, NULL byte terminated), and arbitrary
       aggregates of binary data.

       The type field in the pmDesc structure returned by pmLookupDesc(3)
       identifies the format and type of the values for a particular
       performance metric within a particular PMAPI context.

       Note that the encoding of values for a particular performance metric
       may be different for different PMAPI contexts, due to differences in
       the underlying implementation for different contexts.  However it is
       expected that the vast majority of performance metrics will have
       consistent value encoding across all versions of all implementations,
       and hence across all PMAPI contexts.

       The PMAPI supports routines to automate the handling of the various
       value formats and types, particularly for the common case where
       conversion to a canonical format is desired; see pmExtractValue(3) and
       pmPrintValue(3).
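
       As a sketch, the first value of the first metric in a pmFetch(3)
       result rp may be extracted as a double (desc being the corresponding
       pmDesc) as follows:

           pmAtomValue atom;
           int         sts;

           sts = pmExtractValue(rp->vset[0]->valfmt,
                                &rp->vset[0]->vlist[0],
                                desc.type, &atom, PM_TYPE_DOUBLE);
           if (sts >= 0)
               printf("value: %g\n", atom.d);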

THE DIMENSIONALITY AND SCALE OF METRIC VALUES
       Independent of how the value is encoded, the value for a performance
       metric is assumed to be drawn from a set of values that can be
       described in terms of their dimensionality and scale by a compact
       encoding, as follows.  The dimensionality is defined by a power, or
       index, in each of 3 orthogonal dimensions, namely Space, Time and
       Count (or Events, which are dimensionless).  For example, I/O
       throughput might be represented as Space/Time, while the running total
       of system calls is Count, memory allocation is Space and average
       service time is Time/Count.  In each dimension there are a number of
       common scale values that may be used to better encode ranges that
       might otherwise exhaust the precision of a 32-bit value.  This
       information is encoded in the pmUnits structure which is embedded in
       the pmDesc structure returned from pmLookupDesc(3).
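
       As an illustration, the pmUnits encodings for I/O throughput
       (Space/Time) and average service time (Time/Count) might be sketched
       as:

           /* bytes / second */
           pmUnits throughput = {
               .dimSpace = 1, .dimTime = -1,
               .scaleSpace = PM_SPACE_BYTE, .scaleTime = PM_TIME_SEC
           };

           /* milliseconds / count */
           pmUnits service = {
               .dimTime = 1, .dimCount = -1,
               .scaleTime = PM_TIME_MSEC, .scaleCount = PM_COUNT_ONE
           };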

       The routine pmConvScale(3) is provided to convert values in
       conjunction with the pmUnits structure that defines the dimensionality
       and scale of the values for a particular performance metric as
       returned from pmFetch(3), and the desired dimensionality and scale of
       the value the PMAPI client wishes to manipulate.  Alternatively, the
       pmFetchGroup(3) functions can perform data format and unit conversion
       operations, specified by textual descriptions of the desired units and
       scales.
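
       For example, a sketch that rescales a value already extracted into
       atom (in the units of desc.units, assumed here to be bytes/second)
       into Mbytes/second:

           pmUnits     mbps = desc.units;      /* same dimensions ... */
           pmAtomValue out;

           mbps.scaleSpace = PM_SPACE_MBYTE;   /* ... megabyte scale */
           if (pmConvScale(PM_TYPE_DOUBLE, &atom, &desc.units,
                           &out, &mbps) >= 0)
               printf("%.2f Mbytes/sec\n", out.d);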

INSTANCE PROFILE
       The set of instances for performance metrics returned from a
       pmFetch(3) call may be filtered or restricted using an instance
       profile.  There is one instance profile for each PMAPI context the
       application creates, and each instance profile may include instances
       from one or more instance domains.

       The routines pmAddProfile(3) and pmDelProfile(3) may be used to
       dynamically adjust the instance profile.
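
       A common idiom, sketched here for a hypothetical instance named cpu0
       in the instance domain indom, is to exclude all instances and then
       re-admit only those of interest:

           int inst;

           if ((inst = pmLookupInDom(indom, "cpu0")) >= 0) {
               pmDelProfile(indom, 0, NULL);   /* exclude all instances */
               pmAddProfile(indom, 1, &inst);  /* include just this one */
           }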

COLLECTION TIME
       For each set of values for performance metrics returned via pmFetch(3)
       there is an associated ``timestamp'' that serves to identify when the
       performance metric values were collected; for metrics being delivered
       from a real-time source (i.e. pmcd(1) on some host) this would
       typically be not long before they were exported across the PMAPI, and
       for metrics being delivered from a set of archives, this would be the
       time when the metrics were written into the archive.

       There is an issue here of exactly when individual metrics may have
       been collected, especially given their origin in potentially different
       Performance Metric Domains, and variability in the metric updating
       frequency at the lowest level of the Performance Metric Domain.  The
       PMCS opts for the pragmatic approach, in which the PMAPI
       implementation undertakes to return all of the metrics with values
       accurate as of the timestamp, to the best of its ability.  The belief
       is that the inaccuracy this introduces is small, and that the
       additional burden of accurate individual timestamping for each
       returned metric value is neither warranted nor practical (from an
       implementation viewpoint).

       Of course, in the case of collection of metrics from multiple hosts
       the PMAPI client must assume the sanity of the timestamps is
       constrained by the extent to which clock synchronization protocols are
       implemented across the network.

       A PMAPI application may call pmSetMode(3) to vary the requested
       collection time, e.g. to rescan performance metric values from the
       recent past, or to ``fast-forward'' through a set of archives.
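
       As a sketch, using the classic timeval-based pmSetMode(3) interface
       with the time step expressed in milliseconds, an archive context may
       be replayed from its start in 10 second steps:

           pmLogLabel label;

           if (pmGetArchiveLabel(&label) >= 0 &&
               pmSetMode(PM_MODE_INTERP, &label.ll_start, 10000) >= 0) {
               /* successive pmFetch() calls now return values
                  interpolated at 10 second intervals */
           }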

GENERAL ISSUES OF PMAPI PROGRAMMING STYLE
       Across the PMAPI, all arguments and results involving a ``list of
       something'' are declared to be arrays with an associated argument or
       function value to identify the number of elements in the list.  This
       has been done to avoid both the varargs(3) approach and
       sentinel-terminated lists.

       Where the size of a result is known at the time of a call, it is the
       caller's responsibility to allocate (and possibly free) the storage,
       and the called function will assume the result argument is of an
       appropriate size.  Where a result is of variable size and that size
       cannot be known in advance (e.g. for pmGetChildren(3), pmGetInDom(3),
       pmNameInDom(3), pmNameID(3), pmLookupLabels(3), pmLookupText(3) and
       pmFetch(3)) the PMAPI implementation uses a range of dynamic
       allocation schemes in the called routine, with the caller responsible
       for subsequently releasing the storage when no longer required.  In
       some cases this simply involves calls to free(3), but in others (most
       notably for the result from pmFetch(3)), special routines (e.g.
       pmFreeResult(3) and pmFreeLabelSets(3)) should be used to release the
       storage.
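
       For instance, the convention for pmGetChildren(3) may be sketched as
       follows, where a single free(3) call releases the returned list:

           char **offspring;
           int  i, n;

           if ((n = pmGetChildren("kernel", &offspring)) >= 0) {
               for (i = 0; i < n; i++)
                   printf("kernel.%s\n", offspring[i]);
               free(offspring);    /* one call releases the whole list */
           }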

       As a general rule, if the called routine returns an error status then
       no allocation will have been done, and any pointer to a variable sized
       result is undefined.

DIAGNOSTICS
       Where error conditions may arise, the functions that comprise the
       PMAPI conform to a single, simple error notification scheme, as
       follows:

       +  the function returns an integer

       +  values >= 0 indicate no error, and perhaps some positive status,
          e.g. the number of things really processed

       +  values < 0 indicate an error, with a global table of error
          conditions and error messages

       The PMAPI routine pmErrStr(3) translates error conditions into error
       messages.  By convention, the small negative values are assumed to be
       negated versions of the Unix error codes as defined in <errno.h> and
       the strings returned are as per strerror(3).  The larger, negative
       error codes are PMAPI error conditions.
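
       A minimal sketch of the resulting error-handling idiom (names and
       pmids are as in the earlier name lookup sketch):

           int sts;

           if ((sts = pmLookupName(1, names, pmids)) < 0) {
               fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));
               exit(1);
           }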

       One error, common to all PMAPI routines that interact with pmcd(1) on
       some host, is PM_ERR_IPC, which indicates that the communication link
       to pmcd(1) has been lost.

MULTI-THREADED APPLICATIONS
       The original design for PCP was based around single-threaded
       applications, or more strictly applications in which only one thread
       was ever expected to call the PCP libraries.  This restriction has
       been relaxed for libpcp to allow the most common PMAPI routines to be
       safely called from any thread in a multi-threaded application.

       However the following groups of functions and services in libpcp are
       still restricted to being called from a single thread, and this is
       enforced by returning PM_ERR_THREAD when an attempt to call the
       routines in each group from more than one thread is detected.

       1.  Any use of a PM_CONTEXT_LOCAL context, as the DSO PMDAs that are
           called directly from libpcp may not be thread-safe.

PCP ENVIRONMENT
       Most environment variables are described in PCPIntro(1).  In addition,
       environment variables with the prefix PCP_ are used to parameterize
       the file and directory names used by PCP.  On each installation, the
       file /etc/pcp.conf contains the local values for these variables.  The
       $PCP_CONF variable may be used to specify an alternative configuration
       file, as described in pcp.conf(5).  Values for these variables may be
       obtained programmatically using the pmGetConfig(3) function.
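
       For example, a one-line sketch reporting the local value of one such
       variable (the returned string should not be passed to free(3)):

           printf("PCP_VAR_DIR=%s\n", pmGetConfig("PCP_VAR_DIR"));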

SEE ALSO
       PCPIntro(1), PCPIntro(3), PMDA(3), PMWEBAPI(3), pmGetConfig(3),
       pcp.conf(5), pcp.env(5) and PMNS(5).



Performance Co-Pilot                  PCP                             PMAPI(3)