1PUNGI(1) Pungi PUNGI(1)
2
3
4
6 pungi - Pungi Documentation
7
8 Contents:
9
11 [image: Pungi Logo] [image]
12
13 Pungi is a distribution compose tool.
14
15 Composes are release snapshots that contain release deliverables such
16 as:
17
18 • installation trees
19
20 • RPMs
21
22 • repodata
23
24 • comps
25
26 • (bootable) ISOs
27
28 • kickstart trees
29
30 • anaconda images
31
32 • images for PXE boot
33
34 Tool overview
35 Pungi consists of multiple separate executables backed by a common li‐
36 brary.
37
38 The main entry-point is the pungi-koji script. It loads the compose
39 configuration and kicks off the process. Composing itself is done in
40 phases. Each phase is responsible for generating some artifacts on
41 disk and updating the compose object that is threaded through all the
42 phases.
43
44 Pungi itself does not actually do that much. Most of the actual work is
45 delegated to separate executables. Pungi just makes sure that all the
46 commands are invoked in the appropriate order and with correct argu‐
47 ments. It also moves the artifacts to correct locations.
48
49 The executable name pungi-koji comes from the fact that most of those
separate executables submit tasks to Koji, which does the actual work
in an auditable way.
52
However, unlike doing everything manually in Koji, Pungi makes sure
that all images are built from the same package set, and it even
produces deliverables that Koji cannot create, such as YUM repos and
installer ISOs.
57
58 Links
59 • Upstream GIT: https://pagure.io/pungi/
60
61 • Issue tracker: https://pagure.io/pungi/issues
62
63 • Questions can be asked on #fedora-releng IRC channel on FreeNode
64
65 Origin of name
66 The name Pungi comes from the instrument used to charm snakes. Anaconda
67 being the software Pungi was manipulating, and anaconda being a snake,
68 led to the referential naming.
69
70 The first name, which was suggested by Seth Vidal, was FIST, Fedora In‐
71 stallation <Something> Tool. That name was quickly discarded and re‐
72 placed with Pungi.
73
74 There was also a bit of an inside joke that when said aloud, it could
75 sound like punji, which is a sharpened stick at the bottom of a trap.
76 Kind of like software…
77
79 Each invocation of pungi-koji consists of a set of phases. [image:
80 phase diagram] [image]
81
82 Most of the phases run sequentially (left-to-right in the diagram), but
83 there are use cases where multiple phases run in parallel. This happens
84 for phases whose main point is to wait for a Koji task to finish.
85
86 Init
87 The first phase to ever run. Can not be skipped. It prepares the comps
88 files for variants (by filtering out groups and packages that should
89 not be there). See Processing comps files for details about how this
90 is done.
91
92 Pkgset
93 This phase loads a set of packages that should be composed. It has two
94 separate results: it prepares repos with packages in work/ directory
95 (one per arch) for further processing, and it returns a data structure
96 with mapping of packages to architectures.
97
98 Buildinstall
99 Spawns a bunch of threads, each of which runs either lorax or buildin‐
100 stall command (the latter coming from anaconda package). The commands
101 create boot.iso and other boot configuration files. The image is fi‐
102 nally linked into the compose/ directory as netinstall media.
103
104 The created images are also needed for creating live media or other im‐
105 ages in later phases.
106
107 With lorax this phase runs one task per variant.arch combination. For
108 buildinstall command there is only one task per architecture and prod‐
109 uct.img should be used to customize the results.
110
111 Gather
This phase uses data collected by the pkgset phase and figures out what
packages should be in each variant. The basic mapping can come from the
comps file, a JSON mapping or the additional_packages config option.
These inputs can then be enriched by adding all dependencies. See
Gathering packages for details.
117
118 Once the mapping is finalized, the packages are linked to appropriate
119 places and the rpms.json manifest is created.
120
121 ExtraFiles
122 This phase collects extra files from the configuration and copies them
123 to the compose directory. The files are described by a JSON file in the
124 compose subtree where the files are copied. This metadata is meant to
125 be distributed with the data (on ISO images).
126
127 Createrepo
128 This phase creates RPM repositories for each variant.arch tree. It is
129 actually reading the rpms.json manifest to figure out which packages
130 should be included.
131
132 OSTree
133 Updates an ostree repository with a new commit with packages from the
134 compose. The repository lives outside of the compose and is updated
135 immediately. If the compose fails in a later stage, the commit will not
136 be reverted.
137
Implementation-wise, this phase runs the rpm-ostree command in Koji
runroot (to allow running on different arches).
140
141 Createiso
142 Generates ISO files and accumulates enough metadata to be able to cre‐
ate the image.json manifest. The file is however not created in this
phase; instead it is dumped by the pungi-koji script itself.
145
146 The files include a repository with all RPMs from the variant. There
147 will be multiple images if the packages do not fit on a single image.
148
149 The image will be bootable if buildinstall phase is enabled and the
150 packages fit on a single image.
151
152 There can also be images with source repositories. These are never
153 bootable.
154
155 ExtraIsos
156 This phase is very similar to createiso, except it combines content
157 from multiple variants onto a single image. Packages, repodata and ex‐
158 tra files from each configured variant are put into a subdirectory. Ad‐
159 ditional extra files can be put into top level of the image. The image
160 will be bootable if the main variant is bootable.
161
162 LiveImages, LiveMedia
163 Creates media in Koji with koji spin-livecd, koji spin-appliance or
164 koji spin-livemedia command. When the media are finished, the images
165 are copied into the compose/ directory and metadata for images is up‐
166 dated.
167
168 ImageBuild
This phase wraps up koji image-build. It also updates the metadata that
ultimately ends up in the images.json manifest.
171
172 OSBuild
Similarly to image build, this phase creates a koji osbuild task. In
174 the background it uses OSBuild Composer to create images.
175
176 OSBS
177 This phase builds container base images in OSBS.
178
The finished images are available in a registry provided by OSBS, but
are not downloaded directly into the compose. There is metadata about
the created image in compose/metadata/osbs.json.
182
183 ImageContainer
184 This phase builds a container image in OSBS, and stores the metadata in
the same file as the OSBS phase. The container produced here wraps a
different image, created in the ImageBuild or OSBuild phase. This can
be useful for delivering a VM image to containerized environments.
188
189 OSTreeInstaller
190 Creates bootable media that carry an ostree repository as a payload.
191 These images are created by running lorax with special templates. Again
192 it runs in Koji runroot.
193
194 Repoclosure
Run repoclosure on each repository. By default errors are only reported
in the log and the compose is still considered a success; the actual
errors have to be looked up in the compose logs directory. This
behaviour can be customized via configuration.
199
200 ImageChecksum
201 Responsible for generating checksums for the images. The checksums are
202 stored in image manifest as well as files on disk. The list of images
203 to be processed is obtained from the image manifest. This way all im‐
204 ages will get the same checksums irrespective of the phase that created
205 them.
206
207 Test
208 This phase is supposed to run some sanity checks on the finished com‐
209 pose.
210
The only test is to check all images listed in the metadata and verify
212 that they look sane. For ISO files headers are checked to verify the
213 format is correct, and for bootable media a check is run to verify they
214 have properties that allow booting.
215
The configuration file parser is provided by kobo.
218
219 The file follows a Python-like format. It consists of a sequence of
220 variables that have a value assigned to them.
221
222 variable = value
223
224 The variable names must follow the same convention as Python code:
225 start with a letter and consist of letters, digits and underscores
226 only.
227
228 The values can be either an integer, float, boolean (True or False), a
229 string or None. Strings must be enclosed in either single or double
230 quotes.
231
232 Complex types are supported as well.
233
234 A list is enclosed in square brackets and items are separated with com‐
235 mas. There can be a comma after the last item as well.
236
237 a_list = [1,
238 2,
239 3,
240 ]
241
242 A tuple works like a list, but is enclosed in parenthesis.
243
244 a_tuple = (1, "one")
245
A dictionary is wrapped in curly braces, and consists of key: value pairs
247 separated by commas. The keys can only be formed from basic types (int,
248 float, string).
249
250 a_dict = {
251 'foo': 'bar',
252 1: None
253 }
254
255 The value assigned to a variable can also be taken from another vari‐
256 able.
257
258 one = 1
259 another = one
260
261 Anything on a line after a # symbol is ignored and functions as a com‐
262 ment.
263
264 Importing other files
265 It is possible to include another configuration file. The files are
266 looked up relative to the currently processed file.
267
268 The general structure of import is:
269
270 from FILENAME import WHAT
271
272 The FILENAME should be just the base name of the file without extension
273 (which must be .conf). WHAT can either be a comma separated list of
274 variables or *.
275
276 # Opens constants.conf and brings PI and E into current scope.
277 from constants import PI, E
278
279 # Opens common.conf and brings everything defined in that file into current
280 # file as well.
281 from common import *
282
283 NOTE:
284 Pungi will copy the configuration file given on command line into
285 the logs/ directory. Only this single file will be copied, not any
286 included ones. (Copying included files requires a fix in kobo li‐
287 brary.)
288
289 The JSON-formatted dump of configuration is correct though.
290
291 Formatting strings
292 String interpolation is available as well. It uses a %-encoded format.
293 See Python documentation for more details.
294
295 joined = "%s %s" % (var_a, var_b)
296
297 a_dict = {
298 "fst": 1,
299 "snd": 2,
300 }
301 another = "%(fst)s %(snd)s" % a_dict
302
304 Please read productmd documentation for terminology and other release
305 and compose related details.
306
307 Minimal Config Example
308 # RELEASE
309 release_name = "Fedora"
310 release_short = "Fedora"
311 release_version = "23"
312
313 # GENERAL SETTINGS
314 comps_file = "comps-f23.xml"
315 variants_file = "variants-f23.xml"
316
317 # KOJI
318 koji_profile = "koji"
319 runroot = False
320
321 # PKGSET
322 sigkeys = [None]
323 pkgset_source = "koji"
324 pkgset_koji_tag = "f23"
325
326 # GATHER
327 gather_method = "deps"
328 greedy_method = "build"
329 check_deps = False
330
331 # BUILDINSTALL
332 buildinstall_method = "lorax"
333
334 Release
335 Following mandatory options describe a release.
336
337 Options
338 release_name [mandatory]
339 (str) – release name
340
341 release_short [mandatory]
342 (str) – release short name, without spaces and special charac‐
343 ters
344
345 release_version [mandatory]
346 (str) – release version
347
release_type = “ga”
(str) – release type, for example ga, updates or updates-testing.
See list of all valid values in productmd documentation.
351
352 release_internal = False
353 (bool) – whether the compose is meant for public consumption
354
355 treeinfo_version
356 (str) Version to display in .treeinfo files. If not configured,
357 the value from release_version will be used.
358
359 Example
360 release_name = "Fedora"
361 release_short = "Fedora"
362 release_version = "23"
363 # release_type = "ga"
364
365 Base Product
Base product options are optional and we need them only if we’re
composing a layered product built on another (base) product.
368
369 Options
370 base_product_name
371 (str) – base product name
372
373 base_product_short
374 (str) – base product short name, without spaces and special
375 characters
376
377 base_product_version
378 (str) – base product major version
379
380 base_product_type = “ga”
381 (str) – base product type, “ga”, “updates” etc., for full list
382 see documentation of productmd.
383
384 Example
385 release_name = "RPM Fusion"
386 release_short = "rf"
387 release_version = "23.0"
388
389 base_product_name = "Fedora"
390 base_product_short = "Fedora"
391 base_product_version = "23"
392
393 General Settings
394 Options
395 comps_file [mandatory]
396 (scm_dict, str or None) – reference to comps XML file with in‐
397 stallation groups
398
399 variants_file [mandatory]
400 (scm_dict or str) – reference to variants XML file that defines
401 release variants and architectures
402
403 module_defaults_dir [optional]
404 (scm_dict or str) – reference the module defaults directory con‐
405 taining modulemd-defaults YAML documents. Files relevant for
406 modules included in the compose will be embedded in the gener‐
407 ated repodata and available for DNF.
408
409 module_defaults_dir = {
410 "scm": "git",
411 "repo": "https://pagure.io/releng/fedora-module-defaults.git",
412 "dir": ".",
413 }
414
415 failable_deliverables [optional]
(list) – specifies which deliverables on which variant and
architecture can fail without aborting the whole compose. This only
applies to the buildinstall and iso parts. All other artifacts can be
configured in their respective part of the configuration.
420
421 Please note that * as a wildcard matches all architectures but
422 src.
423
424 comps_filter_environments [optional]
425 (bool) – When set to False, the comps files for variants will
426 not have their environments filtered to match the variant.
427
428 tree_arches
429 ([str]) – list of architectures which should be included; if un‐
430 defined, all architectures from variants.xml will be included
431
432 tree_variants
433 ([str]) – list of variants which should be included; if unde‐
434 fined, all variants from variants.xml will be included
435
436 repoclosure_strictness
437 (list) – variant/arch mapping describing how repoclosure should
438 run. Possible values are
439
440 • off – do not run repoclosure
441
442 • lenient – (default) run repoclosure and write results to
443 logs, but detected errors are only reported in logs
444
445 • fatal – abort compose when any issue is detected
446
447 When multiple blocks in the mapping match a variant/arch combi‐
448 nation, the last value will win.
449
450 repoclosure_backend
451 (str) – Select which tool should be used to run repoclosure over
452 created repositories. By default yum is used, but you can switch
453 to dnf. Please note that when dnf is used, the build dependen‐
454 cies check is skipped. On Python 3, only dnf backend is avail‐
455 able.
456
457 See also: the gather_backend setting for Pungi’s gather phase.
458
459 cts_url
460 (str) – URL to Compose Tracking Service. If defined, Pungi will
add the compose to Compose Tracking Service and get the compose
462 ID from it. For example https://cts.localhost.tld/
463
464 cts_keytab
465 (str) – Path to Kerberos keytab which will be used for Compose
466 Tracking Service Kerberos authentication. If not defined, the
467 default Kerberos principal is used.
468
469 cts_oidc_token_url
470 (str) – URL to the OIDC token endpoint. For example
471 https://oidc.example.com/openid-connect/token. This option can
472 be overridden by the environment variable CTS_OIDC_TOKEN_URL.
473
cts_oidc_client_id
(str) – OIDC client ID. This option can be overridden by the
environment variable CTS_OIDC_CLIENT_ID. Note that environment
variable CTS_OIDC_CLIENT_SECRET must be configured with
corresponding client secret to authenticate to CTS via OIDC.
480
481 compose_type
(str) – Allows setting the default compose type. A type set via a
command-line option overrides this.
484
485 mbs_api_url
486 (str) – URL to Module Build Service (MBS) API. For example
487 https://mbs.example.com/module-build-service/2. This is re‐
488 quired by pkgset_scratch_modules.
489
490 Example
491 comps_file = {
492 "scm": "git",
493 "repo": "https://git.fedorahosted.org/git/comps.git",
494 "branch": None,
495 "file": "comps-f23.xml.in",
496 }
497
498 variants_file = {
499 "scm": "git",
500 "repo": "https://pagure.io/pungi-fedora.git ",
501 "branch": None,
502 "file": "variants-fedora.xml",
503 }
504
505 failable_deliverables = [
506 ('^.*$', {
507 # Buildinstall can fail on any variant and any arch
508 '*': ['buildinstall'],
509 'src': ['buildinstall'],
510 # Nothing on i386 blocks the compose
511 'i386': ['buildinstall', 'iso', 'live'],
512 })
513 ]
514
515 tree_arches = ["x86_64"]
516 tree_variants = ["Server"]
517
518 repoclosure_strictness = [
519 # Make repoclosure failures fatal for compose on all variants …
520 ('^.*$', {'*': 'fatal'}),
521 # … except for Everything where it should not run at all.
522 ('^Everything$', {'*': 'off'})
523 ]
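
The CTS and MBS related options described above are not shown in the
example. A minimal sketch of how they might be combined is given below;
all URLs and the keytab path are placeholders, not values taken from
the documentation.

# Compose Tracking Service (optional).
cts_url = "https://cts.example.com/"
cts_keytab = "/etc/krb5.pungi.keytab"
# Or authenticate via OIDC instead of Kerberos:
# cts_oidc_token_url = "https://oidc.example.com/openid-connect/token"
# cts_oidc_client_id = "pungi"

# Default compose type unless overridden on the command line.
compose_type = "production"

# Module Build Service, only required for pkgset_scratch_modules.
mbs_api_url = "https://mbs.example.com/module-build-service/2"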
524
525 Image Naming
526 Both image name and volume id are generated based on the configuration.
527 Since the volume id is limited to 32 characters, there are more set‐
528 tings available. The process for generating volume id is to get a list
529 of possible formats and try them sequentially until one fits in the
530 length limit. If substitutions are configured, each attempted volume id
will be modified by them.
532
533 For layered products, the candidate formats are first image_volid_lay‐
534 ered_product_formats followed by image_volid_formats. Otherwise, only
535 image_volid_formats are tried.
536
537 If no format matches the length limit, an error will be reported and
538 compose aborted.
539
540 Options
There are a couple of common format specifiers available for both options:
542
543 • compose_id
544
545 • release_short
546
547 • version
548
549 • date
550
551 • respin
552
553 • type
554
555 • type_suffix
556
557 • label
558
559 • label_major_version
560
561 • variant
562
563 • arch
564
565 • disc_type
566
567 image_name_format [optional]
568 (str|dict) – Python’s format string to serve as template for im‐
569 age names. The value can also be a dict mapping variant UID
regexes to the format string. The patterns should not overlap;
otherwise it is undefined which one will be used.
572
573 This format will be used for all phases generating images. Cur‐
574 rently that means createiso, live_images and buildinstall.
575
576 Available extra keys are:
577
578 • disc_num
579
580 • suffix
581
582 image_volid_formats [optional]
583 (list) – A list of format strings for generating volume id.
584
585 The extra available keys are:
586
587 • base_product_short
588
589 • base_product_version
590
591 image_volid_layered_product_formats [optional]
592 (list) – A list of format strings for generating volume id for
593 layered products. The keys available are the same as for im‐
594 age_volid_formats.
595
596 restricted_volid = False
597 (bool) – New versions of lorax replace all non-alphanumerical
598 characters with dashes (underscores are preserved). This option
599 will mimic similar behaviour in Pungi.
600
601 volume_id_substitutions [optional]
602 (dict) – A mapping of string replacements to shorten the volume
603 id.
604
605 disc_types [optional]
606 (dict) – A mapping for customizing disc_type used in image
607 names.
608
609 Available keys are:
610
611 • boot – for boot.iso images created in buildinstall
612 phase
613
614 • live – for images created by live_images phase
615
616 • dvd – for images created by createiso phase
617
618 • ostree – for ostree installer images
619
620 Default values are the same as the keys.
621
622 Example
623 # Image name respecting Fedora's image naming policy
624 image_name_format = "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s%(suffix)s"
625 # Use the same format for volume id
626 image_volid_formats = [
627 "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s"
628 ]
629 # No special handling for layered products, use same format as for regular images
630 image_volid_layered_product_formats = []
631 # Replace "Cloud" with "C" in volume id etc.
632 volume_id_substitutions = {
633 'Cloud': 'C',
634 'Alpha': 'A',
635 'Beta': 'B',
636 'TC': 'T',
637 }
638
639 disc_types = {
640 'boot': 'netinst',
641 'live': 'Live',
642 'dvd': 'DVD',
643 }
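
The image_name_format option can also be a dictionary keyed by variant
UID regexes. A minimal sketch, assuming a hypothetical naming scheme
for a Workstation variant; the patterns must not overlap, otherwise it
is undefined which one is used.

image_name_format = {
    "^Workstation$": "%(release_short)s-WS-%(arch)s-%(version)s%(suffix)s",
    "^Server$": "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s%(suffix)s",
}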
644
645 Signing
If you want to sign deliverables generated during a Pungi run, such as
RPM-wrapped images, you must provide a few configuration options:
648
649 signing_command [optional]
650 (str) – Command that will be run with a koji build as a single
651 argument. This command must not require any user interaction.
652 If you need to pass a password for a signing key to the command,
653 do this via command line option of the command and use string
654 formatting syntax %(signing_key_password)s. (See sign‐
655 ing_key_password_file).
656
657 signing_key_id [optional]
658 (str) – ID of the key that will be used for the signing. This
659 ID will be used when crafting koji paths to signed files (kojip‐
660 kgs.fedoraproject.org/pack‐
661 ages/NAME/VER/REL/data/signed/KEYID/..).
662
663 signing_key_password_file [optional]
664 (str) – Path to a file with password that will be formatted into
665 signing_command string via %(signing_key_password)s string for‐
666 mat syntax (if used). Because pungi config is usually stored in
667 git and is part of compose logs we don’t want password to be in‐
cluded directly in the config. Note: if - is used instead of a
filename, then you will be asked for the password interactively
right after pungi starts.
671
672 Example
673 signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
674 signing_key_id = '81b46521'
675 signing_key_password_file = '~/password_for_fedora-24_key'
676
677 Git URLs
678 In multiple places the config requires URL of a Git repository to down‐
679 load some file from. This URL is passed on to Koji. It is possible to
specify which commit to use with the following syntax:
681
682 git://git.example.com/git/repo-name.git?#<rev_spec>
683
684 The <rev_spec> pattern can be replaced with actual commit SHA, a tag
685 name, HEAD to indicate that tip of default branch should be used or
686 origin/<branch_name> to use tip of arbitrary branch.
687
688 If the URL specifies a branch or HEAD, Pungi will replace it with the
689 actual commit SHA. This will later show up in Koji tasks and help with
690 tracing what particular inputs were used.
691
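As an illustration, these are hypothetical URLs showing the accepted
forms of <rev_spec> (the repository itself is a placeholder):

# A specific commit, a tag, the tip of the default branch, or the tip
# of an arbitrary branch:
# git://git.example.com/git/repo-name.git?#0f1e2d3c4b5a69788796a5b4c3d2e1f009876543
# git://git.example.com/git/repo-name.git?#my-tag-1.0
# git://git.example.com/git/repo-name.git?#HEAD
# git://git.example.com/git/repo-name.git?#origin/f23
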
692 NOTE:
693 The origin must be specified because of the way Koji works with the
694 repository. It will clone the repository then switch to requested
695 state with git reset --hard REF. Since no local branches are cre‐
696 ated, we need to use full specification including the name of the
697 remote.
698
699 Createrepo Settings
700 Options
701 createrepo_checksum
702 (str) – specify checksum type for createrepo; expected values:
703 sha512, sha256, sha1. Defaults to sha256.
704
705 createrepo_c = True
706 (bool) – use createrepo_c (True) or legacy createrepo (False)
707
708 createrepo_deltas = False
709 (list) – generate delta RPMs against an older compose. This
710 needs to be used together with --old-composes command line argu‐
711 ment. The value should be a mapping of variants and architec‐
712 tures that should enable creating delta RPMs. Source and debug‐
713 info repos never have deltas.
714
715 createrepo_use_xz = False
716 (bool) – whether to pass --xz to the createrepo command. This
717 will cause the SQLite databases to be compressed with xz.
718
719 createrepo_num_threads
720 (int) – how many concurrent createrepo process to run. The de‐
721 fault is to use one thread per CPU available on the machine.
722
723 createrepo_num_workers
724 (int) – how many concurrent createrepo workers to run. Value de‐
725 faults to 3.
726
727 createrepo_database
728 (bool) – whether to create SQLite database as part of the repo‐
729 data. This is only useful as an optimization for clients using
Yum to consume the repo. Default value depends on gather
731 backend. For DNF it’s turned off, for Yum the default is True.
732
733 createrepo_extra_args
734 ([str]) – a list of extra arguments passed on to createrepo or
735 createrepo_c executable. This could be useful for enabling
736 zchunk generation and pointing it to correct dictionaries.
737
738 createrepo_extra_modulemd
739 (dict) – a mapping of variant UID to an scm dict. If specified,
740 it should point to a directory with extra module metadata YAML
741 files that will be added to the repository for this variant. The
742 cloned files should be split into subdirectories for each archi‐
743 tecture of the variant.
744
745 createrepo_enable_cache = True
746 (bool) – whether to use --cachedir option of createrepo. It will
cache and reuse checksum values to speed up createrepo phase.
748 The cache dir is located at /var/cache/pungi/createrepo_c/$re‐
749 lease_short-$uid e.g. /var/cache/pungi/createrepo_c/Fedora-1000
750
751 product_id = None
752 (scm_dict) – If specified, it should point to a directory with
753 certificates *<variant_uid>-<arch>-*.pem. Pungi will copy each
754 certificate file into the relevant Yum repositories as a produc‐
755 tid file in the repodata directories. The purpose of these pro‐
756 ductid files is to expose the product data to
757 subscription-manager. subscription-manager includes a “prod‐
758 uct-id” Yum plugin that can read these productid certificate
759 files from each Yum repository.
760
761 product_id_allow_missing = False
762 (bool) – When product_id is used and a certificate for some
763 variant and architecture is missing, Pungi will exit with an er‐
764 ror by default. When you set this option to True, Pungi will
765 ignore the missing certificate and simply log a warning message.
766
767 product_id_allow_name_prefix = True
768 (bool) – Allow arbitrary prefix for the certificate file name
769 (see leading * in the pattern above). Setting this option to
770 False will make the pattern more strict by requiring the file
771 name to start directly with variant name.
772
773 Example
createrepo_checksum = "sha256"
775 createrepo_deltas = [
776 # All arches for Everything should have deltas.
777 ('^Everything$', {'*': True}),
778 # Also Server.x86_64 should have them (but not on other arches).
779 ('^Server$', {'x86_64': True}),
780 ]
781 createrepo_extra_modulemd = {
782 "Server": {
783 "scm": "git",
784 "repo": "https://example.com/extra-server-modulemd.git",
785 "dir": ".",
786 # The directory should have this layout. Each architecture for the
# variant should be included (even if the directory is empty).
788 # .
789 # ├── aarch64
790 # │ ├── some-file.yaml
791 # │ └ ...
792 # └── x86_64
793 }
794 }
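
The product_id option takes an scm_dict as well. A minimal sketch,
assuming a hypothetical certificate repository whose files follow the
*<variant_uid>-<arch>-*.pem pattern described above:

product_id = {
    "scm": "git",
    "repo": "https://example.com/product-certificates.git",
    "dir": ".",
}
# Log a warning instead of aborting when a certificate is missing.
product_id_allow_missing = True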
795
796 Package Set Settings
797 Options
798 sigkeys
799 ([str or None]) – priority list of signing key IDs. These key
800 IDs match the key IDs for the builds in Koji. Pungi will choose
801 signed packages according to the order of the key IDs that you
802 specify here. Use one single key in this list to ensure that all
803 RPMs are signed by one key. If the list includes an empty string
804 or None, Pungi will allow unsigned packages. If the list only
805 includes None, Pungi will use all unsigned packages.
806
807 pkgset_source [mandatory]
808 (str) – “koji” (any koji instance) or “repos” (arbitrary yum
809 repositories)
810
811 pkgset_koji_tag
812 (str|[str]) – tag(s) to read package set from. This option can
813 be omitted for modular composes.
814
815 pkgset_koji_builds
816 (str|[str]) – extra build(s) to include in a package set defined
817 as NVRs.
818
819 pkgset_koji_scratch_tasks
820 (str|[str]) – RPM scratch build task(s) to include in a package
821 set, defined as task IDs. This option can be used only when com‐
822 pose_type is set to test. The RPM still needs to have higher NVR
823 than any other RPM with the same name coming from other sources
824 in order to appear in the resulting compose.
825
826 pkgset_koji_module_tag
827 (str|[str]) – tags to read module from. This option works simi‐
828 larly to listing tags in variants XML. If tags are specified and
829 variants XML specifies some modules via NSVC (or part of), only
830 modules matching that list will be used (and taken from the
831 tag). Inheritance is used automatically.
832
833 pkgset_koji_module_builds
834 (dict) – A mapping of variants to extra module builds to include
835 in a package set: {variant: [N:S:V:C]}.
836
837 pkgset_koji_inherit = True
838 (bool) – inherit builds from parent tags; we can turn it off
839 only if we have all builds tagged in a single tag
840
841 pkgset_koji_inherit_modules = False
842 (bool) – the same as above, but this only applies to modular
843 tags. This option applies to the content tags that contain the
844 RPMs.
845
846 pkgset_repos
847 (dict) – A mapping of architectures to repositories with RPMs:
848 {arch: [repo]}. Only use when pkgset_source = "repos".
849
850 pkgset_scratch_modules
851 (dict) – A mapping of variants to scratch module builds: {vari‐
852 ant: [N:S:V:C]}. Requires mbs_api_url.
853
854 pkgset_exclusive_arch_considers_noarch = True
855 (bool) – If a package includes noarch in its ExclusiveArch tag,
856 it will be included in all architectures since noarch is compat‐
857 ible with everything. Set this option to False to ignore noarch
858 in ExclusiveArch and always consider only binary architectures.
859
860 pkgset_inherit_exclusive_arch_to_noarch = True
861 (bool) – When set to True, the value of ExclusiveArch or Ex‐
862 cludeArch will be copied from source rpm to all its noarch pack‐
ages. That will then limit which architectures the noarch pack‐
864 ages can be included in.
865
866 By setting this option to False this step is skipped, and noarch
867 packages will by default land in all architectures. They can
868 still be excluded by listing them in a relevant section of fil‐
869 ter_packages.
870
871 pkgset_allow_reuse = True
872 (bool) – When set to True, Pungi will try to reuse pkgset data
873 from the old composes specified by --old-composes. When enabled,
874 this option can speed up new composes because it does not need
875 to calculate the pkgset data from Koji. However, if you block or
876 unblock a package in Koji (for example) between composes, then
877 Pungi may not respect those changes in your new compose.
878
879 signed_packages_retries = 0
880 (int) – In automated workflows, you might start a compose before
881 Koji has completely written all signed packages to disk. In this
882 case you may want Pungi to wait for the package to appear in
883 Koji’s storage. This option controls how many times Pungi will
884 retry looking for the signed copy.
885
886 signed_packages_wait = 30
887 (int) – Interval in seconds for how long to wait between at‐
888 tempts to find signed packages. This option only makes sense
889 when signed_packages_retries is set higher than 0.
890
891 Example
892 sigkeys = [None]
893 pkgset_source = "koji"
894 pkgset_koji_tag = "f23"
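
For the alternative "repos" source, a minimal sketch could look like
this; the repository URLs are placeholders:

pkgset_source = "repos"
pkgset_repos = {
    "x86_64": ["https://example.com/compose/x86_64/os/"],
    "src": ["https://example.com/compose/source/SRPMS/"],
}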
895
896 Buildinstall Settings
897 Script or process that creates bootable images with Anaconda installer
898 is historically called buildinstall.
899
900 Options
901 buildinstall_method
902 (str) – “lorax” (f16+, rhel7+) or “buildinstall” (older re‐
903 leases)
904
905 lorax_options
906 (list) – special options passed on to lorax.
907
908 Format: [(variant_uid_regex, {arch|*: {option: name}})].
909
910 Recognized options are:
911
912 • bugurl – str (default None)
913
914 • nomacboot – bool (default True)
915
916 • noupgrade – bool (default True)
917
918 • add_template – [str] (default empty)
919
920 • add_arch_template – [str] (default empty)
921
922 • add_template_var – [str] (default empty)
923
924 • add_arch_template_var – [str] (default empty)
925
926 • rootfs_size – [int] (default empty)
927
928 • version – [str] (default from treeinfo_version or re‐
929 lease_version) – used as --version and --release argu‐
930 ment on the lorax command line
931
932 • dracut_args – [[str]] (default empty) override argu‐
933 ments for dracut. Please note that if this option is
934 used, lorax will not use any other arguments, so you
935 have to provide a full list and can not just add some‐
936 thing.
937
938 • skip_branding – bool (default False)
939
• squashfs_only – bool (default False) pass the
--squashfs-only flag to Lorax.
942
943 • configuration_file – (scm_dict) (default empty) pass
944 the specified configuration file to Lorax using the -c
945 option.
946
947 lorax_extra_sources
948 (list) – a variant/arch mapping with urls for extra source
949 repositories added to Lorax command line. Either one repo or a
950 list can be specified.
951
952 lorax_use_koji_plugin = False
953 (bool) – When set to True, the Koji pungi_buildinstall task will
954 be used to execute Lorax instead of runroot. Use only if the
955 Koji instance has the pungi_buildinstall plugin installed.
956
957 buildinstall_kickstart
(scm_dict) – If specified, this kickstart file will be copied
into each installer tree and pointed to in the boot configuration.
960
961 buildinstall_topdir
962 (str) – Full path to top directory where the runroot buildin‐
963 stall Koji tasks output should be stored. This is useful in sit‐
964 uation when the Pungi compose is not generated on the same stor‐
965 age as the Koji task is running on. In this case, Pungi can pro‐
966 vide input repository for runroot task using HTTP and set the
967 output directory for this task to buildinstall_topdir. Once the
968 runroot task finishes, Pungi will copy the results of runroot
969 tasks to the compose working directory.
970
971 buildinstall_skip
972 (list) – mapping that defines which variants and arches to skip
973 during buildinstall; format: [(variant_uid_regex, {arch|*:
974 True})]. This is only supported for lorax.
975
976 buildinstall_allow_reuse = False
977 (bool) – When set to True, Pungi will try to reuse buildinstall
978 results from old compose specified by --old-composes.
979
980 buildinstall_packages
981 (list) – Additional packages to be installed in the runroot en‐
982 vironment where lorax will run to create installer. Format:
983 [(variant_uid_regex, {arch|*: [package_globs]})].
984
985 Example
986 buildinstall_method = "lorax"
987
988 # Enables macboot on x86_64 for all variants and builds upgrade images
989 # everywhere.
990 lorax_options = [
991 ("^.*$", {
992 "x86_64": {
993 "nomacboot": False
},
995 "*": {
996 "noupgrade": False
997 }
998 })
999 ]
1000
1001 # Don't run buildinstall phase for Modular variant
1002 buildinstall_skip = [
1003 ('^Modular', {
1004 '*': True
1005 })
1006 ]
1007
1008 # Add another repository for lorax to install packages from
1009 lorax_extra_sources = [
1010 ('^Simple$', {
1011 '*': 'https://example.com/repo/$basearch/',
1012 })
1013 ]
1014
1015 # Additional packages to be installed in the Koji runroot environment where
1016 # lorax will run.
1017 buildinstall_packages = [
1018 ('^Simple$', {
1019 '*': ['dummy-package'],
1020 })
1021 ]
1022
1023 NOTE:
1024 It is advised to run buildinstall (lorax) in koji, i.e. with runroot
1025 enabled for clean build environments, better logging, etc.
1026
1027 WARNING:
1028 Lorax installs RPMs into a chroot. This involves running %post
1029 scriptlets and they frequently run executables in the chroot. If
1030 we’re composing for multiple architectures, we must use runroot for
1031 this reason.
1032
1033 Gather Settings
1034 Options
1035 gather_method [mandatory]
(str|dict) – Options are deps, nodeps and hybrid. Specifies
1037 whether and how package dependencies should be pulled in. Pos‐
1038 sible configuration can be one value for all variants, or if
configured per-variant it can be a simple string hybrid or a
dictionary mapping source type to a value of deps or nodeps.
1041 Make sure only one regex matches each variant, as there is no
1042 guarantee which value will be used if there are multiple match‐
1043 ing ones. All used sources must have a configured method unless
1044 hybrid solving is used.
1045
1046 gather_fulltree = False
1047 (bool) – When set to True all RPMs built from an SRPM will al‐
1048 ways be included. Only use when gather_method = "deps".
1049
1050 gather_selfhosting = False
1051 (bool) – When set to True, Pungi will build a self-hosting tree
1052 by following build dependencies. Only use when gather_method =
1053 "deps".
1054
1055 gather_allow_reuse = False
1056 (bool) – When set to True, Pungi will try to reuse gather re‐
1057 sults from old compose specified by --old-composes.
1058
1059 greedy_method = none
1060 (str) – This option controls how package requirements are satis‐
1061 fied in case a particular Requires has multiple candidates.
1062
• none – the best package is selected to satisfy the dependency
1064 and only that one is pulled into the compose
1065
1066 • all – packages that provide the symbol are pulled in
1067
1068 • build – the best package is selected, and then all packages
1069 from the same build that provide the symbol are pulled in
1070
1071 NOTE:
1072 As an example let’s work with this situation: a package in
1073 the compose has Requires: foo. There are three packages with
1074 Provides: foo: pkg-a, pkg-b-provider-1 and pkg-b-provider-2.
The pkg-b-* packages are built from the same source package.
1076 Best match determines pkg-b-provider-1 as best matching pack‐
1077 age.
1078
1079 • With greedy_method = "none" only pkg-b-provider-1 will be
1080 pulled in.
1081
1082 • With greedy_method = "all" all three packages will be
1083 pulled in.
1084
1085 • With greedy_method = "build" pkg-b-provider-1 and
1086 pkg-b-provider-2 will be pulled in.
1087
1088 gather_backend
(str) – This changes the entire codebase doing dependency solv‐
1090 ing, so it can change the result in unpredictable ways.
1091
1092 On Python 2, the choice is between yum or dnf and defaults to
1093 yum. On Python 3 dnf is the only option and default.
1094
1095 Particularly the multilib work is performed differently by using
1096 python-multilib library. Please refer to multilib option to see
1097 the differences.
1098
1099 See also: the repoclosure_backend setting for Pungi’s repoclo‐
1100 sure phase.
1101
1102 multilib
1103 (list) – mapping of variant regexes and arches to list of multi‐
1104 lib methods
1105
1106 Available methods are:
1107
1108 • none – no package matches this method
1109
1110 • all – all packages match this method
1111
1112 • runtime – packages that install some shared object file
1113 (*.so.*) will match.
1114
1115 • devel – packages whose name ends with -devel or
-static suffix will be matched. When dnf is used, this
1117 method automatically enables runtime method as well.
1118 With yum backend this method also uses a hardcoded
1119 blacklist and whitelist.
1120
1121 • kernel – packages providing kernel or kernel-devel
1122 match this method (only in yum backend)
1123
1124 • yaboot – only yaboot package on ppc arch matches this
1125 (only in yum backend)
1126
1127 additional_packages
1128 (list) – additional packages to be included in a variant and ar‐
1129 chitecture; format: [(variant_uid_regex, {arch|*: [pack‐
1130 age_globs]})]
1131
1132 In contrast to the comps_file setting, the additional_packages
1133 setting merely adds the list of packages to the compose. When a
1134 package is in a comps group, it is visible to users via dnf
1135 groupinstall and Anaconda’s Groups selection, but addi‐
1136 tional_packages does not affect DNF groups.
1137
1138 The packages specified here are matched against RPM names, not
1139 any other provides in the package nor the name of source pack‐
1140 age. Shell globbing is used, so wildcards are possible. The
1141 package can be specified as name only or name.arch.
1142
1143 With dnf gathering backend, you can specify a debuginfo package
1144 to be included. This is meant to include a package if autodetec‐
1145 tion does not get it. If you add a debuginfo package that does
1146 not have anything else from the same build included in the com‐
1147 pose, the sources will not be pulled in.
1148
1149 If you list a package in additional_packages but Pungi cannot
1150 find it (for example, it’s not available in the Koji tag), Pungi
1151 will log a warning in the “work” or “logs” directories and con‐
1152 tinue without aborting.
1153
1154 Example: This configuration will add all packages in a Koji tag
1155 to an “Everything” variant:
1156
1157 additional_packages = [
1158 ('^Everything$', {
1159 '*': [
1160 '*',
1161 ],
1162 })
1163 ]
1164
1165 filter_packages
1166 (list) – packages to be excluded from a variant and architec‐
1167 ture; format: [(variant_uid_regex, {arch|*: [package_globs]})]
1168
1169 See additional_packages for details about package specification.
1170
1171 filter_modules
1172 (list) – modules to be excluded from a variant and architecture;
1173 format: [(variant_uid_regex, {arch|*: [name:stream]})]
1174
1175 Both name and stream can use shell-style globs. If stream is
1176 omitted, all streams are removed.
1177
1178 This option only applies to modules taken from Koji tags, not
1179 modules explicitly listed in variants XML without any tags.
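
A minimal sketch of the mapping, with a hypothetical module name:

filter_modules = [
    ("^Modular$", {
        "*": ["nodejs:*"],
    }),
]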
1180
1181 filter_system_release_packages
1182 (bool) – for each variant, figure out the best system release
1183 package and filter out all others. This will not work if a vari‐
1184 ant needs more than one system release package. In such case,
1185 set this option to False.
1186
1187 gather_prepopulate = None
1188 (scm_dict) – If specified, you can use this to add additional
1189 packages. The format of the file pointed to by this option is a
1190 JSON mapping {variant_uid: {arch: {build: [package]}}}. Packages
1191 added through this option can not be removed by filter_packages.
1192
1193 multilib_blacklist
1194 (dict) – multilib blacklist; format: {arch|*: [package_globs]}.
1195
1196 See additional_packages for details about package specification.
1197
1198 multilib_whitelist
(dict) – multilib whitelist; format: {arch|*: [package_names]}.
1200 The whitelist must contain exact package names; there are no
1201 wildcards or pattern matching.
1202
1203 gather_lookaside_repos = []
1204 (list) – lookaside repositories used for package gathering; for‐
1205 mat: [(variant_uid_regex, {arch|*: [repo_urls]})]
1206
1207 The repo_urls are passed to the depsolver, which can use pack‐
1208 ages in the repos for satisfying dependencies, but the packages
1209 themselves are not pulled into the compose. The repo_urls can
1210 contain $basearch variable, which will be substituted with
1211 proper value by the depsolver.
1212
The repo_urls are used by repoclosure too, but it currently can’t
parse $basearch, which will cause the Repoclosure phase to crash.
The repoclosure_strictness option can be used to stop running
repoclosure in that case.
1217
1218 Please note that * as a wildcard matches all architectures but
1219 src.
1220
1221 hashed_directories = False
1222 (bool) – put packages into “hashed” directories, for example
1223 Packages/k/kernel-4.0.4-301.fc22.x86_64.rpm
1224
1225 check_deps = True
1226 (bool) – Set to False if you don’t want the compose to abort
1227 when some package has broken dependencies.
1228
1229 require_all_comps_packages = False
1230 (bool) – Set to True to abort compose when package mentioned in
1231 comps file can not be found in the package set. When disabled
1232 (the default), such cases are still reported as warnings in the
1233 log.
1234
1235 With dnf gather backend, this option will abort the compose on
1236 any missing package no matter if it’s listed in comps, addi‐
1237 tional_packages or prepopulate file.
1238
1239 gather_source_mapping
1240 (str) – JSON mapping with initial packages for the compose. The
1241 value should be a path to JSON file with following mapping:
1242 {variant: {arch: {rpm_name: [rpm_arch|None]}}}. Relative paths
1243 are interpreted relative to the location of main config file.
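
A minimal sketch of such a JSON file, with hypothetical package names;
null corresponds to None in the format above:

{
    "Server": {
        "x86_64": {
            "bash": ["x86_64"],
            "dummy-docs": [null]
        }
    }
}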
1244
1245 gather_profiler = False
1246 (bool) – When set to True the gather tool will produce addi‐
1247 tional performance profiling information at the end of its logs.
1248 Only takes effect when gather_backend = "dnf".
1249
1250 variant_as_lookaside
(list) – a variant/variant mapping specifying that one or more
variants in the compose use other variant(s) in the compose as a
lookaside. Only top level variants are supported (not
addons/layered products). Format: [(variant_uid, variant_uid)]
1255
1256 Example
1257 gather_method = "deps"
1258 greedy_method = "build"
1259 check_deps = False
1260 hashed_directories = True
1261
1262 gather_method = {
1263 "^Everything$": {
1264 "comps": "deps" # traditional content defined by comps groups
1265 },
1266 "^Modular$": {
1267 "module": "nodeps" # Modules do not need dependencies
1268 },
1269 "^Mixed$": { # Mixed content in one variant
1270 "comps": "deps",
1271 "module": "nodeps"
},
1273 "^OtherMixed$": "hybrid", # Using hybrid depsolver
1274 }
1275
1276 additional_packages = [
1277 # bz#123456
1278 ('^(Workstation|Server)$', {
1279 '*': [
1280 'grub2',
1281 'kernel',
1282 ],
1283 }),
1284 ]
1285
1286 filter_packages = [
1287 # bz#111222
1288 ('^.*$', {
1289 '*': [
1290 'kernel-doc',
1291 ],
1292 }),
1293 ]
1294
1295 multilib = [
1296 ('^Server$', {
1297 'x86_64': ['devel', 'runtime']
1298 })
1299 ]
1300
1301 multilib_blacklist = {
1302 "*": [
1303 "gcc",
1304 ],
1305 }
1306
1307 multilib_whitelist = {
1308 "*": [
1309 "alsa-plugins-*",
1310 ],
1311 }
1312
1313 # gather_lookaside_repos = [
1314 # ('^.*$', {
1315 # '*': [
1316 # "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/$basearch/os/",
1317 # ],
1318 # 'x86_64': [
1319 # "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/",
1320 # ]
1321 # }),
1322 # ]
1323
1324 NOTE:
1325 It is a good practice to attach bug/ticket numbers to addi‐
1326 tional_packages, filter_packages, multilib_blacklist and multi‐
1327 lib_whitelist to track decisions.
1328
1329 Koji Settings
1330 Options
1331 koji_profile
1332 (str) – koji profile name. This tells Pungi how to communicate
1333 with your chosen Koji instance. See Koji’s documentation about
1334 profiles for more information about how to set up your Koji
1335 client profile. In the examples, the profile name is “koji”,
1336 which points to Fedora’s koji.fedoraproject.org.
1337
1338 global_runroot_method
1339 (str) – global runroot method to use. If runroot_method is set
1340 per Pungi phase using a dictionary, this option defines the de‐
1341 fault runroot method for phases not mentioned in the run‐
1342 root_method dictionary.
1343
1344 runroot_method
(str|dict) – Runroot method to use. It can further specify the
1346 runroot method in case the runroot is set to True.
1347
1348 Available methods are:
1349
1350 • local – runroot tasks are run locally
1351
1352 • koji – runroot tasks are run in Koji
1353
1354 • openssh – runroot tasks are run on remote machine con‐
1355 nected using OpenSSH. The runroot_ssh_hostnames for
1356 each architecture must be set and the user under which
1357 Pungi runs must be configured to login as run‐
1358 root_ssh_username using the SSH key.
1359
1360 The runroot method can also be set per Pungi phase using the
1361 dictionary with phase name as key and runroot method as value.
1362 The default runroot method is in this case defined by the
1363 global_runroot_method option.
1364
1365 Example
1366 global_runroot_method = "koji"
1367 runroot_method = {
1368 "createiso": "local"
1369 }
1370
1371 runroot_channel
1372 (str) – name of koji channel
1373
1374 runroot_tag
1375 (str) – name of koji build tag used for runroot
1376
1377 runroot_weights
1378 (dict) – customize task weights for various runroot tasks. The
1379 values in the mapping should be integers, the keys can be se‐
1380 lected from the following list. By default no weight is assigned
1381 and Koji picks the default one according to policy.
1382
1383 • buildinstall
1384
1385 • createiso
1386
1387 • ostree
1388
1389 • ostree_installer
1390
1391 Example
1392 koji_profile = "koji"
1393 runroot_channel = "runroot"
1394 runroot_tag = "f23-build"
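
The runroot_weights option is not shown above; a minimal sketch with
purely illustrative weight values:

runroot_weights = {
    "buildinstall": 100,
    "createiso": 150,
}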
1395
1396 Runroot “openssh” method settings
1397 Options
1398 runroot_ssh_username
1399 (str) – For openssh runroot method, configures the username used
1400 to login the remote machine to run the runroot task. Defaults to
1401 “root”.
1402
1403 runroot_ssh_hostnames
1404 (dict) – For openssh runroot method, defines the hostname for
1405 each architecture on which the runroot task should be running.
1406 Format: {"x86_64": "runroot-x86-64.localhost.tld", ...}
1407
1408 runroot_ssh_init_template
1409 (str) [optional] – For openssh runroot method, defines the com‐
mand that initializes the runroot task on the remote machine. This
command is executed as the first command for each runroot task.
1413
1414 The command can print a string which is then available as {run‐
1415 root_key} for other SSH commands. This string might be used to
1416 keep the context across different SSH commands executed for sin‐
1417 gle runroot task.
1418
1419 The goal of this command is setting up the environment for real
1420 runroot commands. For example preparing the unique mock environ‐
1421 ment, mounting the desired file-systems, …
1422
1423 The command string can contain following variables which are re‐
1424 placed by the real values before executing the init command:
1425
1426 • {runroot_tag} - Tag to initialize the runroot environment
1427 from.
1428
1429 When not set, no init command is executed.
1430
1431 runroot_ssh_install_packages_template
1432 (str) [optional] – For openssh runroot method, defines the tem‐
1433 plate for command to install the packages requested to run the
1434 runroot task.
1435
1436 The template string can contain following variables which are
1437 replaced by the real values before executing the install com‐
1438 mand:
1439
1440 • {runroot_key} - Replaced with the string returned by run‐
1441 root_ssh_init_template if used. This can be used to keep the
1442 track of context of SSH commands belonging to single runroot
1443 task.
1444
• {packages} - Whitespace-separated list of packages to install.
1446
1447 Example (The {runroot_key} is expected to be set to mock config
1448 file using the runroot_ssh_init_template command.): "mock -r
1449 {runroot_key} --install {packages}"
1450
1451 When not set, no command to install packages on remote machine
1452 is executed.
1453
1454 runroot_ssh_run_template
1455 (str) [optional] – For openssh runroot method, defines the tem‐
1456 plate for the main runroot command.
1457
1458 The template string can contain following variables which are
1459 replaced by the real values before executing the install com‐
1460 mand:
1461
1462 • {runroot_key} - Replaced with the string returned by run‐
1463 root_ssh_init_template if used. This can be used to keep the
1464 track of context of SSH commands belonging to single runroot
1465 task.
1466
1467 • {command} - Command to run.
1468
1469 Example (The {runroot_key} is expected to be set to mock config
1470 file using the runroot_ssh_init_template command.): "mock -r
1471 {runroot_key} chroot -- {command}"
1472
1473 When not set, the runroot command is run directly.
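
Putting the openssh settings together, a minimal sketch could look like
this; the hostnames and the init script path are placeholders, and the
mock based templates follow the examples given above:

runroot_method = "openssh"
runroot_ssh_username = "root"
runroot_ssh_hostnames = {
    "x86_64": "runroot-x86-64.localhost.tld",
    "ppc64le": "runroot-ppc64le.localhost.tld",
}
# The init command is expected to print a mock config name, which then
# becomes {runroot_key} in the other templates.
runroot_ssh_init_template = "/usr/local/bin/init-runroot {runroot_tag}"
runroot_ssh_install_packages_template = "mock -r {runroot_key} --install {packages}"
runroot_ssh_run_template = "mock -r {runroot_key} chroot -- {command}"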
1474
1475 Extra Files Settings
1476 Options
1477 extra_files
1478 (list) – references to external files to be placed in os/ direc‐
1479 tory and media; format: [(variant_uid_regex, {arch|*:
1480 [scm_dict]})]. See Exporting files from SCM for details. If the
1481 dict specifies a target key, an additional subdirectory will be
1482 used.
1483
1484 Example
1485 extra_files = [
1486 ('^.*$', {
1487 '*': [
1488 # GPG keys
1489 {
1490 "scm": "rpm",
1491 "repo": "fedora-repos",
1492 "branch": None,
1493 "file": [
1494 "/etc/pki/rpm-gpg/RPM-GPG-KEY-22-fedora",
1495 ],
1496 "target": "",
1497 },
1498 # GPL
1499 {
1500 "scm": "git",
1501 "repo": "https://pagure.io/pungi-fedora",
1502 "branch": None,
1503 "file": [
1504 "GPL",
1505 ],
1506 "target": "",
1507 },
1508 ],
1509 }),
1510 ]
1511
1512 Extra Files Metadata
If extra files are specified, a metadata file, extra_files.json, is
placed in the os/ directory and on media. The checksums generated are
determined by the media_checksums option. This metadata file is in the for‐
1516 mat:
1517
1518 {
1519 "header": {"version": "1.0},
1520 "data": [
1521 {
1522 "file": "GPL",
1523 "checksums": {
1524 "sha256": "8177f97513213526df2cf6184d8ff986c675afb514d4e68a404010521b880643"
1525 },
1526 "size": 18092
1527 },
1528 {
1529 "file": "release-notes/notes.html",
1530 "checksums": {
1531 "sha256": "82b1ba8db522aadf101dca6404235fba179e559b95ea24ff39ee1e5d9a53bdcb"
1532 },
1533 "size": 1120
1534 }
1535 ]
1536 }
1537
1538 CreateISO Settings
1539 Options
1540 createiso_skip = False
1541 (list) – mapping that defines which variants and arches to skip
1542 during createiso; format: [(variant_uid_regex, {arch|*: True})]
1543
1544 createiso_max_size
1545 (list) – mapping that defines maximum expected size for each
1546 variant and arch. If the ISO is larger than the limit, a warning
1547 will be issued.
1548
1549 Format: [(variant_uid_regex, {arch|*: number})]
1550
1551 createiso_max_size_is_strict
1552 (list) – Set the value to True to turn the warning from cre‐
1553 ateiso_max_size into a hard error that will abort the compose.
1554 If there are multiple matches in the mapping, the check will be
1555 strict if at least one match says so.
1556
1557 Format: [(variant_uid_regex, {arch|*: bool})]
1558
1559 create_jigdo = False
1560 (bool) – controls the creation of jigdo from ISO
1561
1562 create_optional_isos = False
1563 (bool) – when set to True, ISOs will be created even for op‐
1564 tional variants. By default only variants with type variant or
1565 layered-product will get ISOs.
1566
1567 createiso_break_hardlinks = False
1568 (bool) – when set to True, all files that should go on the ISO
1569 and have a hardlink will be first copied into a staging direc‐
1570 tory. This should work around a bug in genisoimage including in‐
1571 correct link count in the image, but it is at the cost of having
1572 to copy a potentially significant amount of data.
1573
1574 The staging directory is deleted when ISO is successfully cre‐
1575 ated. In that case the same task to create the ISO will not be
1576 re-runnable.
1577
1578 createiso_use_xorrisofs = False
1579 (bool) – when set to True, use xorrisofs for creating ISOs in‐
1580 stead of genisoimage.
1581
1582 iso_size = 4700000000
1583 (int|str) – size of ISO image. The value should either be an in‐
1584 teger meaning size in bytes, or it can be a string with k, M, G
1585 suffix (using multiples of 1024).
1586
1587 iso_level
1588 (int|list) [optional] – Set the ISO9660 conformance level. This
1589 is either a global single value (a number from 1 to 4), or a
1590 variant/arch mapping.
1591
1592 split_iso_reserve = 10MiB
1593 (int|str) – how much free space should be left on each disk. The
1594 format is the same as for iso_size option.
1595
1596 iso_hfs_ppc64le_compatible = True
1597 (bool) – when set to False, the Apple/HFS compatibility is
1598 turned off for ppc64le ISOs. This option only makes sense for
1599 bootable products, and affects images produced in createiso and
1600 extra_isos phases.
1601
1602 NOTE:
The source architecture needs to be listed explicitly: the ‘*’
wildcard applies only to binary arches. Jigdo causes a significant
increase in ISO creation time.
1606
1607 Example
1608 createiso_skip = [
1609 ('^Workstation$', {
1610 '*': True,
1611 'src': True
1612 }),
1613 ]
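
The size related options are not shown above; a minimal sketch with
illustrative values:

# Warn when a Server ISO grows beyond a single-layer DVD and make that
# warning fatal.
createiso_max_size = [
    ("^Server$", {"*": 4700000000}),
]
createiso_max_size_is_strict = [
    ("^Server$", {"*": True}),
]
# Use a single global ISO9660 conformance level.
iso_level = 3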
1614
1615 Automatic generation of version and release
1616 Version and release values for certain artifacts can be generated auto‐
1617 matically based on release version, compose label, date, type and
1618 respin. This can be used to shorten the config and keep it the same for
1619 multiple uses.
1620
1621┌───────────────────────┬───────────────┬──────────┬──────────┬────────┬──────────────┐
1622│Compose ID │ Label │ Version │ Date │ Respin │ Release │
1623├───────────────────────┼───────────────┼──────────┼──────────┼────────┼──────────────┤
1624│F-Rawhide-20170406.n.0 │ - │ Rawhide │ 20170406 │ 0 │ 20170406.n.0 │
1625├───────────────────────┼───────────────┼──────────┼──────────┼────────┼──────────────┤
1626│F-26-20170329.1 │ Alpha-1.6 │ 26_Alpha │ 20170329 │ 1 │ 1.6 │
1627├───────────────────────┼───────────────┼──────────┼──────────┼────────┼──────────────┤
1628│F-Atomic-25-20170407.0 │ RC-20170407.0 │ 25 │ 20170407 │ 0 │ 20170407.0 │
1629├───────────────────────┼───────────────┼──────────┼──────────┼────────┼──────────────┤
1630│F-Atomic-25-20170407.0 │ - │ 25 │ 20170407 │ 0 │ 20170407.0 │
1631└───────────────────────┴───────────────┴──────────┴──────────┴────────┴──────────────┘
1632
       All non-RC milestones from the label get appended to the version. The
       release is taken either from the label, or derived from the date, type
       and respin.
1635
1636 Common options for Live Images, Live Media and Image Build
1637 All images can have ksurl, version, release and target specified. Since
1638 this can create a lot of duplication, there are global options that can
1639 be used instead.
1640
1641 For each of the phases, if the option is not specified for a particular
1642 deliverable, an option named <PHASE_NAME>_<OPTION> is checked. If that
1643 is not specified either, the last fallback is global_<OPTION>. If even
1644 that is unset, the value is considered to not be specified.
1645
1646 The kickstart URL is configured by these options.
1647
1648 • global_ksurl – global fallback setting
1649
1650 • live_media_ksurl
1651
1652 • image_build_ksurl
1653
1654 • live_images_ksurl
1655
1656 Target is specified by these settings.
1657
1658 • global_target – global fallback setting
1659
1660 • live_media_target
1661
1662 • image_build_target
1663
1664 • live_images_target
1665
1666 • osbuild_target
1667
1668 Version is specified by these options. If no version is set, a default
1669 value will be provided according to automatic versioning.
1670
1671 • global_version – global fallback setting
1672
1673 • live_media_version
1674
1675 • image_build_version
1676
1677 • live_images_version
1678
1679 • osbuild_version
1680
       Release is specified by these options. If set to the magic value
       !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN, a value will be generated
       according to automatic versioning.
1684
1685 • global_release – global fallback setting
1686
1687 • live_media_release
1688
1689 • image_build_release
1690
1691 • live_images_release
1692
1693 • osbuild_release
1694
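       As an illustration of the fallback order described above (the target
       and version values here are only examples, and f32-candidate is a
       hypothetical Koji target), an image_build deliverable without an
       explicit target would use image_build_target and only fall back to
       global_target if that is unset as well:

           global_target = 'f32'
           global_version = 'Rawhide'
           global_release = '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN'
           # Override the target for the image_build phase only.
           image_build_target = 'f32-candidate'
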
1695 Each configuration block can also optionally specify a failable key.
1696 For live images it should have a boolean value. For live media and im‐
1697 age build it should be a list of strings containing architectures that
1698 are optional. If any deliverable fails on an optional architecture, it
1699 will not abort the whole compose. If the list contains only "*", all
1700 arches will be substituted.
1701
1702 Live Images Settings
1703 live_images
1704 (list) – Configuration for the particular image. The elements of
1705 the list should be tuples (variant_uid_regex, {arch|*: config}).
1706 The config should be a dict with these keys:
1707
1708 • kickstart (str)
1709
1710 • ksurl (str) [optional] – where to get the kickstart from
1711
1712 • name (str)
1713
1714 • version (str)
1715
1716 • target (str)
1717
1718 • repo (str|[str]) – repos specified by URL or variant UID
1719
1720 • specfile (str) – for images wrapped in RPM
1721
1722 • scratch (bool) – only RPM-wrapped images can use scratch
1723 builds, but by default this is turned off
1724
1725 • type (str) – what kind of task to start in Koji. Defaults
1726 to live meaning koji spin-livecd will be used. Alternative
1727 option is appliance corresponding to koji spin-appliance.
1728
1729 • sign (bool) – only RPM-wrapped images can be signed
1730
1731 live_images_no_rename
1732 (bool) – When set to True, filenames generated by Koji will be
1733 used. When False, filenames will be generated based on im‐
1734 age_name_format configuration option.
1735
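       Example
       A minimal sketch (the kickstart, image name and repo values are only
       illustrative and would come from your own configuration):

           live_images = [
               ('^Workstation$', {
                   'x86_64': {
                       'kickstart': 'fedora-live-workstation.ks',
                       'name': 'Fedora-Workstation-Live',
                       'repo': 'Everything',
                       'type': 'live',
                   }
               }),
           ]
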
1736 Live Media Settings
1737 live_media
1738 (dict) – configuration for koji spin-livemedia; format: {vari‐
1739 ant_uid_regex: [{opt:value}]}
1740
1741 Required options:
1742
1743 • name (str)
1744
1745 • version (str)
1746
1747 • arches ([str]) – what architectures to build the media for;
1748 by default uses all arches for the variant.
1749
1750 • kickstart (str) – name of the kickstart file
1751
1752 Available options:
1753
1754 • ksurl (str)
1755
1756 • ksversion (str)
1757
1758 • scratch (bool)
1759
1760 • target (str)
1761
1762 • release (str) – a string with the release, or !RE‐
1763 LEASE_FROM_LABEL_DATE_TYPE_RESPIN to automatically generate
1764 a suitable value. See automatic versioning for details.
1765
1766 • skip_tag (bool)
1767
1768 • repo (str|[str]) – repos specified by URL or variant UID
1769
1770 • title (str)
1771
1772 • install_tree_from (str) – variant to take install tree from
1773
1774 • nomacboot (bool)
1775
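       Example
       A minimal sketch (the name, kickstart and repo values are only
       illustrative):

           live_media = {
               '^Workstation$': [
                   {
                       'name': 'Fedora-Workstation-Live',
                       'version': 'Rawhide',
                       'kickstart': 'fedora-live-workstation.ks',
                       'arches': ['x86_64'],
                       'repo': 'Everything',
                       'failable': ['*'],
                   }
               ],
           }
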
1776 Image Build Settings
1777 image_build
1778 (dict) – config for koji image-build; format: {vari‐
1779 ant_uid_regex: [{opt: value}]}
1780
1781 By default, images will be built for each binary arch valid for
1782 the variant. The config can specify a list of arches to narrow
1783 this down.
1784
       NOTE:
          The config can contain anything that is accepted by koji
          image-build --config configfile.ini
1788
       Repo can be specified either as a string or a list of strings. It
       will be automatically transformed into a format suitable for koji. A
       repo for the currently built variant will be added as well.
1792
1793 If you explicitly set release to !RELEASE_FROM_LA‐
1794 BEL_DATE_TYPE_RESPIN, it will be replaced with a value generated as
1795 described in automatic versioning.
1796
1797 If you explicitly set release to !RELEASE_FROM_DATE_RESPIN, it will
1798 be replaced with a value generated as described in automatic ver‐
1799 sioning.
1800
1801 If you explicitly set version to !VERSION_FROM_VERSION, it will be
1802 replaced with a value generated as described in automatic version‐
1803 ing.
1804
       Please don't set install_tree. This gets set automatically by pungi
       based on the current variant. You can use the install_tree_from key
       to use the install tree from another variant.
1808
1809 Both the install tree and repos can use one of following formats:
1810
1811 • URL to the location
1812
1813 • name of variant in the current compose
1814
1815 • absolute path on local filesystem (which will be translated
1816 using configured mappings or used unchanged, in which case you
1817 have to ensure the koji builders can access it)
1818
1819 You can set either a single format, or a list of formats. For avail‐
1820 able values see help output for koji image-build command.
1821
1822 If ksurl ends with #HEAD, Pungi will figure out the SHA1 hash of
1823 current HEAD and use that instead.
1824
1825 Setting scratch to True will run the koji tasks as scratch builds.
1826
1827 Example
1828 image_build = {
1829 '^Server$': [
1830 {
1831 'image-build': {
                       'format': ['docker', 'qcow2'],
1833 'name': 'fedora-qcow-and-docker-base',
1834 'target': 'koji-target-name',
1835 'ksversion': 'F23', # value from pykickstart
1836 'version': '23',
1837 # correct SHA1 hash will be put into the URL below automatically
1838 'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
1839 'kickstart': "fedora-docker-base.ks",
1840 'repo': ["http://someextrarepos.org/repo", "ftp://rekcod.oi/repo"],
1841 'distro': 'Fedora-20',
1842 'disk_size': 3,
1843
1844 # this is set automatically by pungi to os_dir for given variant
1845 # 'install_tree': 'http://somepath',
1846 },
1847 'factory-parameters': {
1848 'docker_cmd': "[ '/bin/bash' ]",
1849 'docker_env': "[ 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' ]",
1850 'docker_labels': "{'Name': 'fedora-docker-base', 'License': u'GPLv2', 'RUN': 'docker run -it --rm ${OPT1} --privileged -v \`pwd\`:/atomicapp -v /run:/run -v /:/host --net=host --name ${NAME} -e NAME=${NAME} -e IMAGE=${IMAGE} ${IMAGE} -v ${OPT2} run ${OPT3} /atomicapp', 'Vendor': 'Fedora Project', 'Version': '23', 'Architecture': 'x86_64' }",
1851 }
1852 },
1853 {
1854 'image-build': {
                       'format': ['docker', 'qcow2'],
1856 'name': 'fedora-qcow-and-docker-base',
1857 'target': 'koji-target-name',
1858 'ksversion': 'F23', # value from pykickstart
1859 'version': '23',
1860 # correct SHA1 hash will be put into the URL below automatically
1861 'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
1862 'kickstart': "fedora-docker-base.ks",
1863 'repo': ["http://someextrarepos.org/repo", "ftp://rekcod.oi/repo"],
1864 'distro': 'Fedora-20',
1865 'disk_size': 3,
1866
1867 # this is set automatically by pungi to os_dir for given variant
1868 # 'install_tree': 'http://somepath',
1869 }
1870 },
1871 {
1872 'image-build': {
1873 'format': 'qcow2',
1874 'name': 'fedora-qcow-base',
1875 'target': 'koji-target-name',
1876 'ksversion': 'F23', # value from pykickstart
1877 'version': '23',
1878 'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
1879 'kickstart': "fedora-docker-base.ks",
1880 'distro': 'Fedora-23',
1881
1882 # only build this type of image on x86_64
                       'arches': ['x86_64'],
1884
1885 # Use install tree and repo from Everything variant.
1886 'install_tree_from': 'Everything',
1887 'repo': ['Everything'],
1888
1889 # Set release automatically.
1890 'release': '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN',
1891 }
1892 }
1893 ]
1894 }
1895
1896 OSBuild Composer for building images
1897 osbuild
              (dict) – configuration for building images in the OSBuild
              Composer service fronted by a Koji plugin. Pungi will trigger a
              Koji task delegated to OSBuild Composer, which will build the
              image and import it to Koji via content generators.
1902
1903 Format: {variant_uid_regex: [{...}]}.
1904
1905 Required keys in the configuration dict:
1906
1907 • name – name of the Koji package
1908
              • distro – the distribution for which the image should be
                built.
1911
1912 • image_types – a list with a single image type string or just a
1913 string representing the image type to build (e.g. qcow2). In
1914 any case, only a single image type can be provided as an argu‐
1915 ment.
1916
1917 Optional keys:
1918
1919 • target – which build target to use for the task. Either this
1920 option or the global osbuild_target is required.
1921
1922 • version – version for the final build (as a string). This op‐
1923 tion is required if the global osbuild_version is not speci‐
1924 fied.
1925
1926 • release – release part of the final NVR. If neither this op‐
1927 tion nor the global osbuild_release is set, Koji will automat‐
1928 ically generate a value.
1929
1930 • repo – a list of repositories from which to consume packages
1931 for building the image. By default only the variant repository
1932 is used. The list items may use one of the following formats:
1933
1934 • String with just the repository URL.
1935
1936 • Dictionary with the following keys:
1937
1938 • baseurl – URL of the repository.
1939
                           • package_sets – a list of package set names to
                             use for this repository. Package sets are an
                             internal concept of Image Builder and are used
                             in image definitions. If specified, the
                             repository is used by Image Builder only for
                             the pipeline with the same name. For example,
                             specifying the build package set name will make
                             the repository be used only for the build
                             environment in which the image will be built.
                             (optional)
1951
1952 • arches – list of architectures for which to build the image.
1953 By default, the variant arches are used. This option can only
1954 restrict it, not add a new one.
1955
1956 • manifest_type – the image type that is put into the manifest
1957 by pungi. If not supplied then it is autodetected from the
1958 Koji output.
1959
1960 • ostree_url – URL of the repository that’s used to fetch the
1961 parent commit from.
1962
1963 • ostree_ref – name of the ostree branch
1964
              • ostree_parent – commit hash or a branch-like reference to
                the parent commit.
1967
1968 • upload_options – a dictionary with upload options specific to
1969 the target cloud environment. If provided, the image will be
1970 uploaded to the cloud environment, in addition to the Koji
1971 server. One can’t combine arbitrary image types with arbitrary
1972 upload options. The dictionary keys differ based on the tar‐
1973 get cloud environment. The following keys are supported:
1974
1975 • AWS EC2 upload options – upload to Amazon Web Services.
1976
1977 • region – AWS region to upload the image to
1978
1979 • share_with_accounts – list of AWS account IDs to share the
1980 image with
1981
1982 • snapshot_name – Snapshot name of the uploaded EC2 image
1983 (optional)
1984
1985 • AWS S3 upload options – upload to Amazon Web Services S3.
1986
1987 • region – AWS region to upload the image to
1988
1989 • Azure upload options – upload to Microsoft Azure.
1990
1991 • tenant_id – Azure tenant ID to upload the image to
1992
1993 • subscription_id – Azure subscription ID to upload the im‐
1994 age to
1995
1996 • resource_group – Azure resource group to upload the image
1997 to
1998
1999 • location – Azure location of the resource group (optional)
2000
2001 • image_name – Image name of the uploaded Azure image (op‐
2002 tional)
2003
2004 • GCP upload options – upload to Google Cloud Platform.
2005
2006 • region – GCP region to upload the image to
2007
2008 • bucket – GCP bucket to upload the image to (optional)
2009
2010 • share_with_accounts – list of GCP accounts to share the
2011 image with
2012
2013 • image_name – Image name of the uploaded GCP image (op‐
2014 tional)
2015
2016 • Container upload options – upload to a container registry.
2017
2018 • name – name of the container image (optional)
2019
2020 • tag – container tag to upload the image to (optional)
2021
2022 NOTE:
2023 There is initial support for having this task as failable without
2024 aborting the whole compose. This can be enabled by setting "fail‐
2025 able": ["*"] in the config for the image. It is an on/off switch
2026 without granularity per arch.
2027
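       Example
       A minimal sketch; the variant regex, package name, distro and target
       values are only illustrative and must match what the Koji and OSBuild
       Composer instances actually know about:

           osbuild = {
               '^Cloud$': [
                   {
                       'name': 'fedora-cloud-image',
                       'distro': 'fedora-32',
                       'image_types': ['qcow2'],
                       # Either set target here or rely on the global
                       # osbuild_target option.
                       'target': 'f32-candidate',
                       'arches': ['x86_64'],
                       'failable': ['*'],
                   }
               ]
           }
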
2028 Image container
2029 This phase supports building containers in OSBS that embed an image
2030 created in the same compose. This can be useful for delivering the im‐
2031 age to users running in containerized environments.
2032
       Pungi will start a buildContainer task in Koji with the configured
       source repository. The Dockerfile can expect that a repo file will be
       injected into the container defining a repo named image-to-include,
       whose baseurl points to the image to include. The URL can be extracted
       with a command like dnf config-manager --dump image-to-include
       | awk '/baseurl =/{print $3}'.
2039
2040 image_container
2041 (dict) – configuration for building containers embedding an im‐
2042 age.
2043
2044 Format: {variant_uid_regex: [{...}]}.
2045
2046 The inner object will define a single container. These keys are
2047 required:
2048
2049 • url, target, git_branch. See OSBS section for definition of
2050 these.
2051
2052 • image_spec – (object) A string mapping of filters used to se‐
2053 lect the image to embed. All images listed in metadata for the
2054 variant will be processed. The keys of this filter are used to
                select metadata fields for the image, and the values are
                regular expressions that need to match the metadata value.
2057
2058 The filter should match exactly one image.
2059
2060 Example config
2061 image_container = {
2062 "^Server$": [{
2063 "url": "git://example.com/dockerfiles.git?#HEAD",
2064 "target": "f24-container-candidate",
2065 "git_branch": "f24",
2066 "image_spec": {
2067 "format": "qcow2",
2068 "arch": "x86_64",
2069 "path": ".*/guest-image-.*$",
2070 }
2071 }]
2072 }
2073
2074 OSTree Settings
2075 The ostree phase of Pungi can create and update ostree repositories.
2076 This is done by running rpm-ostree compose in a Koji runroot environ‐
2077 ment. The ostree repository itself is not part of the compose and
2078 should be located in another directory. Any new packages in the compose
2079 will be added to the repository with a new commit.
2080
       ostree (dict) – a mapping of configuration for each variant. The
              format should be {variant_uid_regex: config_dict}. It is
              possible to use a list of configuration dicts as well.
2084
2085 The configuration dict for each variant arch pair must have
2086 these keys:
2087
2088 • treefile – (str) Filename of configuration for rpm-ostree.
2089
2090 • config_url – (str) URL for Git repository with the treefile.
2091
2092 • repo – (str|dict|[str|dict]) repos specified by URL or variant
2093 UID or a dict of repo options, baseurl is required in the
2094 dict.
2095
2096 • ostree_repo – (str) Where to put the ostree repository
2097
2098 These keys are optional:
2099
2100 • keep_original_sources – (bool) Keep the existing source repos
2101 in the tree config file. If not enabled, all the original
2102 source repos will be removed from the tree config file.
2103
2104 • config_branch – (str) Git branch of the repo to use. Defaults
2105 to master.
2106
2107 • arches – ([str]) List of architectures for which to update os‐
2108 tree. There will be one task per architecture. By default all
2109 architectures in the variant are used.
2110
2111 • failable – ([str]) List of architectures for which this deliv‐
2112 erable is not release blocking.
2113
2114 • update_summary – (bool) Update summary metadata after tree
2115 composing. Defaults to False.
2116
2117 • force_new_commit – (bool) Do not use rpm-ostree’s built-in
2118 change detection. Defaults to False.
2119
2120 • unified_core – (bool) Use rpm-ostree in unified core mode for
2121 composes. Defaults to False.
2122
2123 • version – (str) Version string to be added as versioning meta‐
2124 data. If this option is set to !OSTREE_VERSION_FROM_LA‐
2125 BEL_DATE_TYPE_RESPIN, a value will be generated automatically
2126 as $VERSION.$RELEASE. If this option is set to !VER‐
2127 SION_FROM_VERSION_DATE_RESPIN, a value will be generated auto‐
2128 matically as $VERSION.$DATE.$RESPIN. See how those values are
2129 created.
2130
2131 • tag_ref – (bool, default True) If set to False, a git refer‐
2132 ence will not be created.
2133
              • ostree_ref – (str) Overrides the ref value from the treefile.
2135
2136 • runroot_packages – (list) A list of additional package names
2137 to be installed in the runroot environment in Koji.
2138
2139 Example config
2140 ostree = {
2141 "^Atomic$": {
2142 "treefile": "fedora-atomic-docker-host.json",
2143 "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
2144 "repo": [
2145 "Server",
2146 "http://example.com/repo/x86_64/os",
2147 {"baseurl": "Everything"},
2148 {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
2149 ],
2150 "keep_original_sources": True,
2151 "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
2152 "update_summary": True,
2153 # Automatically generate a reasonable version
2154 "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
2155 # Only run this for x86_64 even if Atomic has more arches
2156 "arches": ["x86_64"],
2157 }
2158 }
2159
2160 ostree_use_koji_plugin = False
2161 (bool) – When set to True, the Koji pungi_ostree task will be
2162 used to execute rpm-ostree instead of runroot. Use only if the
2163 Koji instance has the pungi_ostree plugin installed.
2164
2165 Ostree Installer Settings
       The ostree_installer phase of Pungi can produce an installer image
       bundling an OSTree repository. This always runs in Koji as a runroot
       task.
2169
2170 ostree_installer
2171 (dict) – a variant/arch mapping of configuration. The format
2172 should be [(variant_uid_regex, {arch|*: config_dict})].
2173
              The configuration dict for each variant arch pair can contain
              these optional keys:
2178
2179 • repo – (str|[str]) repos specified by URL or variant UID
2180
2181 • release – (str) Release value to set for the installer image.
2182 Set to !RELEASE_FROM_LABEL_DATE_TYPE_RESPIN to generate the
2183 value automatically.
2184
2185 • failable – ([str]) List of architectures for which this deliv‐
2186 erable is not release blocking.
2187
2188 These optional keys are passed to lorax to customize the build.
2189
2190 • installpkgs – ([str])
2191
2192 • add_template – ([str])
2193
2194 • add_arch_template – ([str])
2195
2196 • add_template_var – ([str])
2197
2198 • add_arch_template_var – ([str])
2199
2200 • rootfs_size – ([str])
2201
2202 • template_repo – (str) Git repository with extra templates.
2203
2204 • template_branch – (str) Branch to use from template_repo.
2205
2206 The templates can either be absolute paths, in which case they
2207 will be used as configured; or they can be relative paths, in
2208 which case template_repo needs to point to a Git repository from
2209 which to take the templates.
2210
2211 If the templates need to run with additional dependencies, that
2212 can be configured with the optional key:
2213
2214 • extra_runroot_pkgs – ([str])
2215
              • skip_branding – (bool) Prevents lorax from installing
                branding packages. Defaults to False.
2218
2219 ostree_installer_overwrite = False
2220 (bool) – by default if a variant including OSTree installer also
2221 creates regular installer images in buildinstall phase, there
2222 will be conflicts (as the files are put in the same place) and
2223 Pungi will report an error and fail the compose.
2224
2225 With this option it is possible to opt-in for the overwriting.
2226 The traditional boot.iso will be in the iso/ subdirectory.
2227
2228 ostree_installer_use_koji_plugin = False
2229 (bool) – When set to True, the Koji pungi_buildinstall task will
2230 be used to execute Lorax instead of runroot. Use only if the
2231 Koji instance has the pungi_buildinstall plugin installed.
2232
2233 Example config
2234 ostree_installer = [
2235 ("^Atomic$", {
2236 "x86_64": {
2237 "repo": [
2238 "Everything",
2239 "https://example.com/extra-repo1.repo",
2240 "https://example.com/extra-repo2.repo",
2241 ],
2242 "release": "!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN",
2243 "installpkgs": ["fedora-productimg-atomic"],
2244 "add_template": ["atomic-installer/lorax-configure-repo.tmpl"],
2245 "add_template_var": [
2246 "ostree_osname=fedora-atomic",
2247 "ostree_ref=fedora-atomic/Rawhide/x86_64/docker-host",
2248 ],
2249 "add_arch_template": ["atomic-installer/lorax-embed-repo.tmpl"],
2250 "add_arch_template_var": [
2251 "ostree_repo=https://kojipkgs.fedoraproject.org/compose/atomic/Rawhide/",
2252 "ostree_osname=fedora-atomic",
2253 "ostree_ref=fedora-atomic/Rawhide/x86_64/docker-host",
                   ],
2255 'template_repo': 'https://git.fedorahosted.org/git/spin-kickstarts.git',
2256 'template_branch': 'f24',
2257 }
2258 })
2259 ]
2260
2261 OSBS Settings
2262 Pungi can build container images in OSBS. The build is initiated
2263 through Koji container-build plugin. The base image will be using RPMs
2264 from the current compose and a Dockerfile from specified Git reposi‐
2265 tory.
2266
       Please note that the image is uploaded to a registry and not exported
       into the compose directory. There will be a metadata file in
       compose/metadata/osbs.json with details about the built images
       (assuming they are not scratch builds).
2271
2272 osbs (dict) – a mapping from variant regexes to configuration blocks.
2273 The format should be {variant_uid_regex: [config_dict]}.
2274
2275 The configuration for each image must have at least these keys:
2276
2277 • url – (str) URL pointing to a Git repository with Dockerfile.
2278 Please see Git URLs section for more details.
2279
2280 • target – (str) A Koji target to build the image for.
2281
2282 • git_branch – (str) A branch in SCM for the Dockerfile. This is
2283 required by OSBS to avoid race conditions when multiple builds
2284 from the same repo are submitted at the same time. Please note
2285 that url should contain the branch or tag name as well, so
2286 that it can be resolved to a particular commit hash.
2287
2288 Optionally you can specify failable. If it has a truthy value,
2289 failure to create the image will not abort the whole compose.
2290
2291 The configuration will pass other attributes directly to the
2292 Koji task. This includes scratch and priority. See koji
2293 list-api buildContainer for more details about these options.
2294
              A value for yum_repourls will be created automatically and will
              point at a repository in the current compose. You can add extra
              repositories with the repo key, which takes a list of URLs
              pointing to .repo files or plain variant UIDs; for a variant
              UID Pungi will create the .repo file itself. If a specific URL
              is used in repo, any $COMPOSE_ID variable in the string will be
              replaced with the real compose ID. gpgkey can be specified to
              enable gpgcheck in the repo files generated for variants.
2303
2304 osbs_registries
2305 (dict) – Use this optional setting to emit osbs-request-push
2306 messages for each non-scratch container build. These messages
2307 can guide other tools how to push the images to other reg‐
2308 istries. For example, an external tool might trigger on these
2309 messages and copy the images from OSBS’s registry to a staging
2310 or production registry.
2311
              For each completed container build, Pungi will try to match the
              NVR against a key in the osbs_registries mapping (using
              shell-style globbing) and take the corresponding value,
              collecting these values across all built images. Pungi will
              save this data into logs/global/osbs-registries.json, mapping
              each Koji NVR to the
2317 registry data. Pungi will also send this data to the message bus
2318 on the osbs-request-push topic once the compose finishes suc‐
2319 cessfully.
2320
2321 Pungi simply logs the mapped data and emits the messages. It
2322 does not handle the messages or push images. A separate tool
2323 must do that.
2324
2325 Example config
2326 osbs = {
2327 "^Server$": {
2328 # required
2329 "url": "git://example.com/dockerfiles.git?#HEAD",
2330 "target": "f24-docker-candidate",
2331 "git_branch": "f24-docker",
2332
2333 # optional
2334 "repo": ["Everything", "https://example.com/extra-repo.repo"],
2335 # This will result in three repo urls being passed to the task.
2336 # They will be in this order: Server, Everything, example.com/
2337 "gpgkey": 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release',
2338 }
2339 }
2340
2341 Extra ISOs
       Create an ISO image that contains packages from multiple variants.
       Such an ISO always belongs to one variant, and will be stored in the
       ISO directory of that variant.
2345
2346 The ISO will be bootable if buildinstall phase runs for the parent
2347 variant. It will reuse boot configuration from that variant.
2348
2349 extra_isos
2350 (dict) – a mapping from variant UID regex to a list of configu‐
2351 ration blocks.
2352
2353 • include_variants – (list) list of variant UIDs from which con‐
2354 tent should be added to the ISO; the variant of this image is
2355 added automatically.
2356
              The rest of the configuration keys are optional.
2358
              • filename – (str) template for naming the image. In addition
                to the regular placeholders, a filename placeholder is
                available, containing the name generated using the
                image_name_format option.
2362
              • volid – (str) template for generating the volume ID. Again, a
                volid placeholder can be used, similarly to the file name.
                This can also be a list of templates that will be tried
                sequentially until one generates a volume ID that fits into
                the 32 character limit.
2368
2369 • extra_files – (list) a list of scm_dict objects. These files
2370 will be put in the top level directory of the image.
2371
2372 • arches – (list) a list of architectures for which to build
2373 this image. By default all arches from the variant will be
2374 used. This option can be used to limit them.
2375
2376 • failable_arches – (list) a list of architectures for which the
2377 image can fail to be generated and not fail the entire com‐
2378 pose.
2379
              • skip_src – (bool) allows disabling the creation of an image
                with source packages.
2382
2383 • inherit_extra_files – (bool) by default extra files in vari‐
2384 ants are ignored. If you want to include them in the ISO, set
2385 this option to True.
2386
2387 • max_size – (int) expected maximum size in bytes. If the final
2388 image is larger, a warning will be issued.
2389
2390 Example config
2391 extra_isos = {
2392 'Server': [{
2393 # Will generate foo-DP-1.0-20180510.t.43-Server-x86_64-dvd1.iso
2394 'filename': 'foo-{filename}',
2395 'volid': 'foo-{arch}',
2396
2397 'extra_files': [{
2398 'scm': 'git',
2399 'repo': 'https://pagure.io/pungi.git',
2400 'file': 'setup.py'
2401 }],
2402
2403 'include_variants': ['Client']
2404 }]
2405 }
2406 # This should create image with the following layout:
2407 # .
2408 # ├── Client
2409 # │ ├── Packages
2410 # │ │ ├── a
2411 # │ │ └── b
2412 # │ └── repodata
2413 # ├── Server
2414 # │ ├── Packages
2415 # │ │ ├── a
2416 # │ │ └── b
2417 # │ └── repodata
2418 # └── setup.py
2419
2420 Media Checksums Settings
2421 media_checksums
2422 (list) – list of checksum types to compute, allowed values are
2423 anything supported by Python’s hashlib module (see documentation
2424 for details).
2425
2426 media_checksum_one_file
2427 (bool) – when True, only one CHECKSUM file will be created per
2428 directory; this option requires media_checksums to only specify
2429 one type
2430
2431 media_checksum_base_filename
              (str) – when not set, all checksums will be saved to a file
              named either CHECKSUM or based on the digest type; this option
              allows adding any prefix to that name
2435
              It is possible to use format strings that will be replaced by
              actual values. The allowed keys are:
2438
2439 • arch
2440
2441 • compose_id
2442
2443 • date
2444
2445 • label
2446
2447 • label_major_version
2448
2449 • release_short
2450
2451 • respin
2452
2453 • type
2454
2455 • type_suffix
2456
2457 • version
2458
2459 • dirname (only if media_checksum_one_file is enabled)
2460
2461 For example, for Fedora the prefix should be %(re‐
2462 lease_short)s-%(variant)s-%(version)s-%(date)s%(type_suf‐
2463 fix)s.%(respin)s.
2464
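       Example
       A minimal sketch mirroring the Fedora Rawhide configuration shown
       later in this document:

           media_checksums = ['sha256']
           media_checksum_one_file = True
           media_checksum_base_filename = '%(release_short)s-%(variant)s-%(version)s-%(arch)s-%(date)s%(type_suffix)s.%(respin)s'
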
2465 Translate Paths Settings
2466 translate_paths
2467 (list) – list of paths to translate; format: [(path, trans‐
2468 lated_path)]
2469
       NOTE:
          This feature becomes useful when you need to transform the compose
          location into e.g. an HTTP repo URL which can be passed to koji
          image-build. The path part is normalized via os.path.normpath().
2474
2475 Example config
2476 translate_paths = [
2477 ("/mnt/a", "http://b/dir"),
2478 ]
2479
2480 Example usage
2481 >>> from pungi.util import translate_paths
           >>> print(translate_paths(compose_object_with_mapping, "/mnt/a/c/somefile"))
2483 http://b/dir/c/somefile
2484
2485 Miscellaneous Settings
2486 paths_module
2487 (str) – Name of Python module implementing the same interface as
2488 pungi.paths. This module can be used to override where things
2489 are placed.
2490
2491 link_type = hardlink-or-copy
2492 (str) – Method of putting packages into compose directory.
2493
2494 Available options:
2495
2496 • hardlink-or-copy
2497
2498 • hardlink
2499
2500 • copy
2501
2502 • symlink
2503
2504 • abspath-symlink
2505
2506 skip_phases
2507 (list) – List of phase names that should be skipped. The same
2508 functionality is available via a command line option.
2509
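              For example, to skip the two phases that produce ISO images
              (phase names as used throughout this document):

                  skip_phases = ['buildinstall', 'createiso']
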
2510 release_discinfo_description
2511 (str) – Override description in .discinfo files. The value is a
2512 format string accepting %(variant_name)s and %(arch)s placehold‐
2513 ers.
2514
2515 symlink_isos_to
2516 (str) – If set, the ISO files from buildinstall, createiso and
2517 live_images phases will be put into this destination, and a sym‐
2518 link pointing to this location will be created in actual compose
2519 directory.
2520
2521 dogpile_cache_backend
2522 (str) – If set, Pungi will use the configured Dogpile cache
2523 backend to cache various data between multiple Pungi calls. This
              can make Pungi faster when multiple similar composes run
              regularly within a short time.
2526
2527 For list of available backends, please see the
2528 https://dogpilecache.readthedocs.io documentation.
2529
2530 Most typical configuration uses the dogpile.cache.dbm backend.
2531
2532 dogpile_cache_arguments
2533 (dict) – Arguments to be used when creating the Dogpile cache
2534 backend. See the particular backend’s configuration for the
2535 list of possible key/value pairs.
2536
              For the dogpile.cache.dbm backend, the value can be, for
              example, the following:
2539
2540 {
2541 "filename": "/tmp/pungi_cache_file.dbm"
2542 }
2543
2544 dogpile_cache_expiration_time
2545 (int) – Defines the default expiration time in seconds of data
2546 stored in the Dogpile cache. Defaults to 3600 seconds.
2547
2549 Actual Pungi configuration files can get very large. This pages brings
2550 two examples of (almost) full configuration for two different composes.
2551
2552 Fedora Rawhide compose
       This is a shortened configuration for the Fedora Rawhide compose as of
       2019-10-14.
2555
2556 release_name = 'Fedora'
2557 release_short = 'Fedora'
2558 release_version = 'Rawhide'
2559 release_is_layered = False
2560
2561 bootable = True
2562 comps_file = {
2563 'scm': 'git',
2564 'repo': 'https://pagure.io/fedora-comps.git',
2565 'branch': 'master',
2566 'file': 'comps-rawhide.xml',
2567 # Merge translations by running make. This command will generate the file.
2568 'command': 'make comps-rawhide.xml'
2569 }
2570 module_defaults_dir = {
2571 'scm': 'git',
2572 'repo': 'https://pagure.io/releng/fedora-module-defaults.git',
2573 'branch': 'main',
2574 'dir': '.'
2575 }
2576 # Optional module obsoletes configuration which is merged
2577 # into the module index and gets resolved
2578 module_obsoletes_dir = {
2579 'scm': 'git',
2580 'repo': 'https://pagure.io/releng/fedora-module-defaults.git',
2581 'branch': 'main',
2582 'dir': 'obsoletes'
2583 }
2584
2585 variants_file='variants-fedora.xml'
2586 sigkeys = ['12C944D0']
2587
2588 # Put packages into subdirectories hashed by their initial letter.
2589 hashed_directories = True
2590
2591 # There is a special profile for use with compose. It makes Pungi
2592 # authenticate automatically as rel-eng user.
2593 koji_profile = 'compose_koji'
2594
2595 # RUNROOT settings
2596 runroot = True
2597 runroot_channel = 'compose'
2598 runroot_tag = 'f32-build'
2599
2600 # PKGSET
2601 pkgset_source = 'koji'
2602 pkgset_koji_tag = 'f32'
2603 pkgset_koji_inherit = False
2604
2605 filter_system_release_packages = False
2606
2607 # GATHER
2608 gather_method = {
2609 '^.*': { # For all variants
2610 'comps': 'deps', # resolve dependencies for packages from comps file
2611 'module': 'nodeps', # but not for packages from modules
2612 }
2613 }
2614 gather_backend = 'dnf'
2615 gather_profiler = True
2616 check_deps = False
2617 greedy_method = 'build'
2618
2619 repoclosure_backend = 'dnf'
2620
2621 # CREATEREPO
2622 createrepo_deltas = False
2623 createrepo_database = True
2624 createrepo_use_xz = True
2625 createrepo_extra_args = ['--zck', '--zck-dict-dir=/usr/share/fedora-repo-zdicts/rawhide']
2626
2627 # CHECKSUMS
2628 media_checksums = ['sha256']
2629 media_checksum_one_file = True
2630 media_checksum_base_filename = '%(release_short)s-%(variant)s-%(version)s-%(arch)s-%(date)s%(type_suffix)s.%(respin)s'
2631
2632 # CREATEISO
2633 iso_hfs_ppc64le_compatible = False
2634
2635 # BUILDINSTALL
2636 buildinstall_method = 'lorax'
2637 buildinstall_skip = [
2638 # No installer for Modular variant
2639 ('^Modular$', {'*': True}),
2640 # No 32 bit installer for Everything.
2641 ('^Everything$', {'i386': True}),
2642 ]
2643
2644 # Enables macboot on x86_64 for all variants and disables upgrade image building
2645 # everywhere.
2646 lorax_options = [
2647 ('^.*$', {
2648 'x86_64': {
2649 'nomacboot': False
2650 },
2651 'ppc64le': {
2652 # Use 3GB image size for ppc64le.
2653 'rootfs_size': 3
2654 },
2655 '*': {
2656 'noupgrade': True
2657 }
2658 })
2659 ]
2660
2661 additional_packages = [
2662 ('^(Server|Everything)$', {
2663 '*': [
2664 # Add all architectures of dracut package.
2665 'dracut.*',
                   # Add all packages matching this pattern
2667 'autocorr-*',
2668 ],
2669 }),
2670
2671 ('^Everything$', {
2672 # Everything should include all packages from the tag. This only
2673 # applies to the native arch. Multilib will still be pulled in
2674 # according to multilib rules.
2675 '*': ['*'],
2676 }),
2677 ]
2678
2679 filter_packages = [
2680 ("^.*$", {"*": ["glibc32", "libgcc32"]}),
2681 ('(Server)$', {
2682 '*': [
2683 'kernel*debug*',
2684 'kernel-kdump*',
2685 ]
2686 }),
2687 ]
2688
2689 multilib = [
2690 ('^Everything$', {
2691 'x86_64': ['devel', 'runtime'],
2692 })
2693 ]
2694
2695 # These packages should never be multilib on any arch.
2696 multilib_blacklist = {
2697 '*': [
2698 'kernel', 'kernel-PAE*', 'kernel*debug*', 'java-*', 'php*', 'mod_*', 'ghc-*'
2699 ],
2700 }
2701
2702 # These should be multilib even if they don't match the rules defined above.
2703 multilib_whitelist = {
2704 '*': ['wine', '*-static'],
2705 }
2706
2707 createiso_skip = [
2708 # Keep binary ISOs for Server, but not source ones.
2709 ('^Server$', {'src': True}),
2710
2711 # Remove all other ISOs.
2712 ('^Everything$', {'*': True, 'src': True}),
2713 ('^Modular$', {'*': True, 'src': True}),
2714 ]
2715
2716 # Image name respecting Fedora's image naming policy
2717 image_name_format = '%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s-%(date)s%(type_suffix)s.%(respin)s.iso'
2718 # Use the same format for volume id
2719 image_volid_formats = [
2720 '%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s'
2721 ]
2722 # Used by Pungi to replace 'Cloud' with 'C' (etc.) in ISO volume IDs.
2723 # There is a hard 32-character limit on ISO volume IDs, so we use
2724 # these to try and produce short enough but legible IDs. Note this is
2725 # duplicated in Koji for live images, as livemedia-creator does not
2726 # allow Pungi to tell it what volume ID to use. Note:
2727 # https://fedoraproject.org/wiki/User:Adamwill/Draft_fedora_image_naming_policy
2728 volume_id_substitutions = {
2729 'Beta': 'B',
2730 'Rawhide': 'rawh',
2731 'Silverblue': 'SB',
2732 'Cinnamon': 'Cinn',
2733 'Cloud': 'C',
2734 'Design_suite': 'Dsgn',
2735 'Electronic_Lab': 'Elec',
2736 'Everything': 'E',
2737 'Scientific_KDE': 'SciK',
2738 'Security': 'Sec',
2739 'Server': 'S',
2740 'Workstation': 'WS',
2741 }
2742
2743 disc_types = {
2744 'boot': 'netinst',
2745 'live': 'Live',
2746 }
2747
2748 translate_paths = [
2749 ('/mnt/koji/compose/', 'https://kojipkgs.fedoraproject.org/compose/'),
2750 ]
2751
2752 # These will be inherited by live_media, live_images and image_build
2753 global_ksurl = 'git+https://pagure.io/fedora-kickstarts.git?#HEAD'
2754 global_release = '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN'
2755 global_version = 'Rawhide'
2756 # live_images ignores this in favor of live_target
2757 global_target = 'f32'
2758
2759 image_build = {
2760 '^Container$': [
2761 {
2762 'image-build': {
2763 'format': [('docker', 'tar.xz')],
2764 'name': 'Fedora-Container-Base',
2765 'kickstart': 'fedora-container-base.ks',
2766 'distro': 'Fedora-22',
2767 'disk_size': 5,
2768 'arches': ['armhfp', 'aarch64', 'ppc64le', 's390x', 'x86_64'],
2769 'repo': 'Everything',
2770 'install_tree_from': 'Everything',
2771 'subvariant': 'Container_Base',
2772 'failable': ['*'],
2773 },
2774 'factory-parameters': {
2775 'dockerversion': "1.10.1",
2776 'docker_cmd': '[ "/bin/bash" ]',
2777 'docker_env': '[ "DISTTAG=f32container", "FGC=f32", "container=oci" ]',
2778 'docker_label': '{ "name": "fedora", "license": "MIT", "vendor": "Fedora Project", "version": "32"}',
2779 },
2780 },
2781 ],
2782 }
2783
2784 live_media = {
2785 '^Workstation$': [
2786 {
2787 'name': 'Fedora-Workstation-Live',
2788 'kickstart': 'fedora-live-workstation.ks',
2789 # Variants.xml also contains aarch64 and armhfp, but there
2790 # should be no live media for those arches.
2791 'arches': ['x86_64', 'ppc64le'],
2792 'failable': ['ppc64le'],
2793 # Take packages and install tree from Everything repo.
2794 'repo': 'Everything',
2795 'install_tree_from': 'Everything',
2796 }
2797 ],
2798 '^Spins': [
2799 # There are multiple media for Spins variant. They use subvariant
2800 # field so that they can be identified in the metadata.
2801 {
2802 'name': 'Fedora-KDE-Live',
2803 'kickstart': 'fedora-live-kde.ks',
2804 'arches': ['x86_64'],
2805 'repo': 'Everything',
2806 'install_tree_from': 'Everything',
2807 'subvariant': 'KDE'
2808
2809 },
2810 {
2811 'name': 'Fedora-Xfce-Live',
2812 'kickstart': 'fedora-live-xfce.ks',
2813 'arches': ['x86_64'],
2814 'failable': ['*'],
2815 'repo': 'Everything',
2816 'install_tree_from': 'Everything',
2817 'subvariant': 'Xfce'
2818 },
2819 ],
2820 }
2821
2822 failable_deliverables = [
2823 # Installer and ISOs for server failing do not abort the compose.
2824 ('^Server$', {
2825 '*': ['buildinstall', 'iso'],
2826 }),
2827 ('^.*$', {
2828 # Buildinstall is not blocking
2829 'src': ['buildinstall'],
2830 # Nothing on i386, ppc64le blocks the compose
2831 'i386': ['buildinstall', 'iso'],
2832 'ppc64le': ['buildinstall', 'iso'],
2833 's390x': ['buildinstall', 'iso'],
2834 })
2835 ]
2836
2837 live_target = 'f32'
2838 live_images_no_rename = True
2839 live_images = [
2840 ('^Workstation$', {
2841 'armhfp': {
2842 'kickstart': 'fedora-arm-workstation.ks',
2843 'name': 'Fedora-Workstation-armhfp',
2844 # Again workstation takes packages from Everything.
2845 'repo': 'Everything',
2846 'type': 'appliance',
2847 'failable': True,
2848 }
2849 }),
2850 ('^Server$', {
2851 # But Server has its own repo.
2852 'armhfp': {
2853 'kickstart': 'fedora-arm-server.ks',
2854 'name': 'Fedora-Server-armhfp',
2855 'type': 'appliance',
2856 'failable': True,
2857 }
2858 }),
2859 ]
2860
2861 ostree = {
2862 "^Silverblue$": {
2863 "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
2864 # To get config, clone master branch from this repo and take
2865 # treefile from there.
2866 "treefile": "fedora-silverblue.yaml",
2867 "config_url": "https://pagure.io/workstation-ostree-config.git",
2868 "config_branch": "master",
2869 # Consume packages from Everything
2870 "repo": "Everything",
2871 # Don't create a reference in the ostree repo (signing automation does that).
2872 "tag_ref": False,
2873 # Don't use change detection in ostree.
2874 "force_new_commit": True,
2875 # Use unified core mode for rpm-ostree composes
2876 "unified_core": True,
2877 # This is the location for the repo where new commit will be
2878 # created. Note that this is outside of the compose dir.
2879 "ostree_repo": "/mnt/koji/compose/ostree/repo/",
2880 "ostree_ref": "fedora/rawhide/${basearch}/silverblue",
2881 "arches": ["x86_64", "ppc64le", "aarch64"],
2882 "failable": ['*'],
2883 }
2884 }
2885
2886 ostree_installer = [
2887 ("^Silverblue$", {
2888 "x86_64": {
2889 "repo": "Everything",
2890 "release": None,
2891 "rootfs_size": "8",
2892 # Take templates from this repository.
2893 'template_repo': 'https://pagure.io/fedora-lorax-templates.git',
2894 'template_branch': 'master',
2895 # Use following templates.
2896 "add_template": ["ostree-based-installer/lorax-configure-repo.tmpl",
2897 "ostree-based-installer/lorax-embed-repo.tmpl",
2898 "ostree-based-installer/lorax-embed-flatpaks.tmpl"],
2899 # And add these variables for the templates.
2900 "add_template_var": [
2901 "ostree_install_repo=https://kojipkgs.fedoraproject.org/compose/ostree/repo/",
2902 "ostree_update_repo=https://ostree.fedoraproject.org",
2903 "ostree_osname=fedora",
2904 "ostree_oskey=fedora-32-primary",
2905 "ostree_contenturl=mirrorlist=https://ostree.fedoraproject.org/mirrorlist",
2906 "ostree_install_ref=fedora/rawhide/x86_64/silverblue",
2907 "ostree_update_ref=fedora/rawhide/x86_64/silverblue",
2908 "flatpak_remote_name=fedora",
2909 "flatpak_remote_url=oci+https://registry.fedoraproject.org",
2910 "flatpak_remote_refs=runtime/org.fedoraproject.Platform/x86_64/f30 app/org.gnome.Baobab/x86_64/stable",
2911 ],
2912 'failable': ['*'],
2913 },
2914 })
2915 ]
2916
2917 RCM Tools compose
2918 This is a small compose used to deliver packages to Red Hat internal
2919 users. The configuration is split into two files.
2920
2921 # rcmtools-common.conf
2922
2923 release_name = "RCM Tools"
2924 release_short = "RCMTOOLS"
2925 release_version = "2.0"
2926 release_type = "updates"
2927 release_is_layered = True
2928 createrepo_c = True
2929 createrepo_checksum = "sha256"
2930
2931 # PKGSET
2932 pkgset_source = "koji"
2933 koji_profile = "brew"
2934 pkgset_koji_inherit = True
2935
2936
2937 # GENERAL SETTINGS
2938 bootable = False
2939 comps_file = "rcmtools-comps.xml"
2940 variants_file = "rcmtools-variants.xml"
2941 sigkeys = ["3A3A33A3"]
2942
2943
2944 # RUNROOT settings
2945 runroot = False
2946
2947
2948 # GATHER
2949 gather_method = "deps"
2950 check_deps = True
2951
2952 additional_packages = [
2953 ('.*', {
2954 '*': ['puddle', 'rcm-nexus'],
2955 }
2956 ),
2957 ]
2958
2959 # Set repoclosure_strictness to fatal to avoid installation dependency
2960 # issues in production composes
2961 repoclosure_strictness = [
2962 ("^.*$", {
2963 "*": "fatal"
2964 })
2965 ]
2966
2967 Configuration specific for different base products is split into sepa‐
2968 rate files.
2969
       # rcmtools-rhel-7.conf
2971 from rcmtools-common import *
2972
2973 # BASE PRODUCT
2974 base_product_name = "Red Hat Enterprise Linux"
2975 base_product_short = "RHEL"
2976 base_product_version = "7"
2977
2978 # PKGSET
2979 pkgset_koji_tag = "rcmtools-rhel-7-compose"
2980
2981 # remove i386 arch on rhel7
2982 tree_arches = ["aarch64", "ppc64le", "s390x", "x86_64"]
2983
2984 check_deps = False
2985
2986 # Packages in these repos are available to satisfy dependencies inside the
2987 # compose, but will not be pulled in.
2988 gather_lookaside_repos = [
2989 ("^Client|Client-optional$", {
2990 "x86_64": [
2991 "http://example.redhat.com/rhel/7/Client/x86_64/os/",
2992 "http://example.redhat.com/rhel/7/Client/x86_64/optional/os/",
2993 ],
2994 }),
2995 ("^Workstation|Workstation-optional$", {
2996 "x86_64": [
2997 "http://example.redhat.com/rhel/7/Workstation/x86_64/os/",
2998 "http://example.redhat.com/rhel/7/Workstation/x86_64/optional/os/",
2999 ],
3000 }),
3001 ("^Server|Server-optional$", {
3002 "aarch64": [
3003 "http://example.redhat.com/rhel/7/Server/aarch64/os/",
3004 "http://example.redhat.com/rhel/7/Server/aarch64/optional/os/",
3005 ],
3006 "ppc64": [
3007 "http://example.redhat.com/rhel/7/Server/ppc64/os/",
3008 "http://example.redhat.com/rhel/7/Server/ppc64/optional/os/",
3009 ],
3010 "ppc64le": [
3011 "http://example.redhat.com/rhel/7/Server/ppc64le/os/",
3012 "http://example.redhat.com/rhel/7/Server/ppc64le/optional/os/",
3013 ],
3014 "s390x": [
3015 "http://example.redhat.com/rhel/7/Server/s390x/os/",
3016 "http://example.redhat.com/rhel/7/Server/s390x/optional/os/",
3017 ],
3018 "x86_64": [
3019 "http://example.redhat.com/rhel/7/Server/x86_64/os/",
3020 "http://example.redhat.com/rhel/7/Server/x86_64/optional/os/",
3021 ],
3022 })
3023 ]
3024
       Multiple places in Pungi can use files from external storage. The
       configuration is similar regardless of which backend is used, although
       some features may differ.
3029
       The so-called scm_dict is always put into the configuration as a
       dictionary, which can contain the following keys.
3032
3033 • scm – indicates which SCM system is used. This is always required.
3034 Allowed values are:
3035
3036 • file – copies files from local filesystem
3037
3038 • git – copies files from a Git repository
3039
3040 • cvs – copies files from a CVS repository
3041
3042 • rpm – copies files from a package in the compose
3043
3044 • koji – downloads archives from a given build in Koji build system
3045
3046 • repo
3047
3048 • for Git and CVS backends this should be URL to the repository
3049
3050 • for RPM backend this should be a shell style glob matching package
3051 names (or a list of such globs)
3052
3053 • for file backend this should be empty
3054
3055 • for Koji backend this should be an NVR or package name
3056
3057 • branch
3058
3059 • branch name for Git and CVS backends, with master and HEAD as de‐
3060 faults
3061
3062 • Koji tag for koji backend if only package name is given
3063
3064 • otherwise should not be specified
3065
3066 • file – a list of files that should be exported.
3067
3068 • dir – a directory that should be exported. All its contents will be
3069 exported. This option is mutually exclusive with file.
3070
3071 • command – defines a shell command to run after Git clone to generate
3072 the needed file (for example to run make). Only supported in Git
3073 backend.
3074
3075 • options – a dictionary of additional configuration options. These are
3076 specific to different backends.
3077
3078 Currently supported values for Git:
3079
3080 • credential_helper – path to a credential helper used to supply
3081 username/password for remotes that require authentication.
3082
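       Git example
       The following dict mirrors the comps_file setting from the Fedora
       Rawhide example earlier in this document; the repository URL and file
       name are illustrative:

           {
               # Take a comps file from a Git repository, running make to
               # generate it before it is exported.
               "scm": "git",
               "repo": "https://pagure.io/fedora-comps.git",
               "branch": "master",
               "file": "comps-rawhide.xml",
               "command": "make comps-rawhide.xml",
           }
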
3083 Koji examples
       There are two different ways to configure the Koji backend.
3085
3086 {
3087 # Download all *.tar files from build my-image-1.0-1.
3088 "scm": "koji",
3089 "repo": "my-image-1.0-1",
3090 "file": "*.tar",
3091 }
3092
3093 {
3094 # Find latest build of my-image in tag my-tag and take files from
3095 # there.
3096 "scm": "koji",
3097 "repo": "my-image",
3098 "branch": "my-tag",
3099 "file": "*.tar",
3100 }
3101
       Using both a tag name and an exact NVR will result in an error: the
       NVR would be interpreted as a package name and would not match
       anything.
3104
3105 file vs. dir
3106 Exactly one of these two options has to be specified. Documentation for
3107 each configuration option should specify whether it expects a file or a
3108 directory.
3109
       For the extra_files phase either key is valid and should be chosen
       depending on the actual use case.
3112
3113 Caveats
       The rpm backend can only be used in phases that would extract the
       files after the pkgset phase has finished. You can't get a comps file
       from a package.
3116
       Depending on the Git repository URL configuration, Pungi may only
       export the requested content using git archive. When a command should
       run, this is not possible and a full clone is always needed.
3120
3121 When using koji backend, it is required to provide configuration for
3122 Koji profile to be used (koji_profile). It is not possible to contact
3123 multiple different Koji instances.
3124
3126 Pungi has the ability to emit notification messages about progress and
3127 general status of the compose. These can be used to e.g. send messages
3128 to fedmsg. This is implemented by actually calling a separate script.
3129
       The script will be called with one argument describing the action that
       just happened. A JSON-encoded object will be passed to standard input to
3132 provide more information about the event. At the very least, the object
3133 will contain a compose_id key.
3134
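       As an illustration, a notification script can be as simple as the
       following sketch (a hypothetical stand-in that just logs every event;
       it is not the bundled pungi-fedmsg-notification):

           #!/usr/bin/python3
           import json
           import sys

           def main():
               # The single argument names the action, e.g. "phase-start".
               action = sys.argv[1]
               # The JSON object on standard input carries at least
               # "compose_id" (except for fail-to-start messages).
               data = json.load(sys.stdin)
               print("%s %s" % (action, data.get("compose_id", "-")))

           if __name__ == "__main__":
               main()
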
       The notification script inherits the working directory from the parent
       process, so it is effectively called from the same directory pungi-koji
       itself was called from. The working directory is listed at the start of
       the main log.
3138
3139 Currently these messages are sent:
3140
3141 • status-change – when composing starts, finishes or fails; a status
3142 key is provided to indicate details
3143
3144 • phase-start – on start of a phase
3145
3146 • phase-stop – when phase is finished
3147
3148 • createiso-targets – with a list of images to be created
3149
3150 • createiso-imagedone – when any single image is finished
3151
3152 • createiso-imagefail – when any single image fails to create
3153
          • fail-to-start – when there are incorrect CLI options or errors in
            the configuration file; this message does not contain compose_id,
            and the script is not started in the compose directory (which
            does not exist yet)
3157
          • ostree – when a new commit is created, this message will announce
            its hash and the name of the ref it is meant for.
3160
3161 For phase related messages phase_name key is provided as well.
3162
3163 A pungi-fedmsg-notification script is provided and understands this in‐
3164 terface.
3165
3166 Setting it up
3167 The script should be provided as a command line argument --notifica‐
3168 tion-script.
3169
3170 --notification-script=pungi-fedmsg-notification
3171
3173 A compose created by Pungi consists of one or more variants. A variant
3174 contains a subset of the content targeted at a particular use case.
3175
3176 There are different types of variants. The type affects how packages
3177 are gathered into the variant.
3178
3179 The inputs for gathering are defined by various gather sources. Pack‐
3180 ages from all sources are collected to create a big list of package
       names, comps group names, and a list of packages that should be
       filtered out.
3183
3184 NOTE:
          The inputs for both the explicit package list and the comps file
          are interpreted as RPM names, not arbitrary provides nor source
          package names.
3188
       Next, gather_method defines how the list is processed. For the nodeps
       method, the results from the source are used pretty much as-is [1].
       For the deps method, a process will be launched to figure out what
       dependencies are needed and those will be pulled in.
3193
3194 [1] The lists are filtered based on what packages are available in the
3195 package set, but nothing else will be pulled in.
3196
3197 Variant types
3198 Variant
3199 is a base type that has no special behaviour.
3200
3201 Addon is built on top of a regular variant. Any packages that should
3202 go to both the addon and its parent will be removed from addon.
3203 Packages that are only in addon but pulled in because of
3204 gather_fulltree option will be moved to parent.
3205
3206 Integrated Layered Product
              works similarly to an addon. Additionally, all packages from
              addons on the same parent variant are removed from integrated
              layered products.
3210
3211 The main difference between an addon and integrated layered
3212 product is that integrated layered product has its own identity
3213 in the metadata (defined with product name and version).
3214
3215 NOTE:
3216 There’s also Layered Product as a term, but this is not re‐
3217 lated to variants. It’s used to describe a product that is
3218 not a standalone operating system and is instead meant to be
3219 used on some other base system.
3220
3221 Optional
3222 contains packages that complete the base variants’ package set.
3223 It always has fulltree and selfhosting enabled, so it contains
3224 build dependencies and packages which were not specifically re‐
3225 quested for base variant.
3226
3227 Some configuration options are overridden for particular variant types.
3228
3229 Depsolving configuration
3230 ┌──────────┬──────────────┬──────────────┐
3231 │Variant │ Fulltree │ Selfhosting │
3232 ├──────────┼──────────────┼──────────────┤
3233 │base │ configurable │ configurable │
3234 ├──────────┼──────────────┼──────────────┤
3235 │addon/ILP │ enabled │ disabled │
3236 ├──────────┼──────────────┼──────────────┤
3237 │optional │ enabled │ enabled │
3238 └──────────┴──────────────┴──────────────┘
3239
3240 Profiling
3241 Profiling data on the pungi-gather tool can be enabled by setting the
3242 gather_profiler configuration option to True.
3243
3244 Modular compose
3245 A compose with gather_source set to module is called modular. The pack‐
3246 age list is determined by a list of modules.
3247
3248 The list of modules that will be put into a variant is defined in the
3249 variants.xml file. The file can contain either Name:Stream or
3250 Name:Stream:Version references. See Module Naming Policy for details.
3251 When Version is missing from the specification, Pungi will ask PDC for
3252 the latest one.
3253
3254 The module metadata in PDC contains a list of RPMs in the module as
3255 well as Koji tag from which the packages can be retrieved.
3256
3257 Restrictions
3258 • A modular compose must always use Koji as a package set source.
3259
3261 When Pungi is configured to get packages from a Koji tag, it somehow
3262 needs to access the actual RPM files.
3263
3264 Historically, this required the storage used by Koji to be directly
3265 available on the host where Pungi was running. This was usually
3266 achieved by using NFS for the Koji volume, and mounting it on the com‐
3267 pose host.
3268
3269 The compose could be created directly on the same volume. In such case
3270 the packages would be hardlinked, significantly reducing space consump‐
3271 tion.
3272
       The compose could also be created on different storage, in which
       case the packages would need to be either copied over or symlinked.
       Using symlinks requires that anything accessing the compose (e.g. a
       download server) also mounts the Koji volume in the same location.
3278
       There is also a risk with symlinks that the package in Koji can
       change (for example due to being re-signed), which would invalidate
       composes linking to it.
3282
   Using Koji without direct mount
       It is now possible to run a compose from a Koji tag without direct
       access to the Koji storage.
3286
       Pungi can download the packages over HTTP, store them in a local
       cache, and consume them from there.
3289
       The local cache has a structure similar to the layout of the Koji
       volume.
3291
       When Pungi needs a package, it knows the package's path on the Koji
       volume. It replaces the topdir with the cache location; if the
       resulting file exists, it is used. If it does not exist, the package
       is downloaded from Koji (by replacing the topdir with the topurl).
3296
3297 Koji path /mnt/koji/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
3298 Koji URL https://kojipkgs.fedoraproject.org/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
3299 Local path /mnt/compose/cache/packages/foo/1/1.fc38/data/signed/abcdef/noarch/foo-1-1.fc38.noarch.rpm
3300
3301 The packages can be hardlinked from this cache directory.
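       A minimal sketch of this path translation in Python; the topdir,
       topurl and cache locations are the example values from the table
       above, not configuration read from Pungi:

          import os
          import shutil
          import urllib.request

          KOJI_TOPDIR = "/mnt/koji"
          KOJI_TOPURL = "https://kojipkgs.fedoraproject.org"
          CACHE_TOPDIR = "/mnt/compose/cache"

          def get_package(koji_path):
              """Return a local path for an RPM identified by its Koji volume path."""
              relative = os.path.relpath(koji_path, KOJI_TOPDIR)
              cached = os.path.join(CACHE_TOPDIR, relative)
              if not os.path.exists(cached):
                  # Not cached yet: download it from Koji over HTTP.
                  url = "%s/%s" % (KOJI_TOPURL, relative)
                  os.makedirs(os.path.dirname(cached), exist_ok=True)
                  with urllib.request.urlopen(url) as resp, open(cached, "wb") as out:
                      shutil.copyfileobj(resp, out)
              return cached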
3302
3303 Cleanup
       While the approach above allows each RPM to be downloaded only once,
       it will eventually result in the Koji volume being mirrored locally.
       However, most of the packages will no longer be needed.
3307
3308 There is a script pungi-cache-cleanup that can help with that. It can
3309 find and remove files from the cache that are no longer needed.
3310
       A file is no longer needed if it has a single hard link (meaning it
       is only in the cache, not in any compose) and its mtime is older
       than a given threshold.
3314
       It doesn't make sense to delete files that are hardlinked in an
       existing compose, as deleting them would not save any space anyway.
3317
       The mtime check is meant to preserve files that are downloaded but
       not actually used in a compose, such as a subpackage that is not
       included in any variant. Every time a file's existence in the local
       cache is checked, its mtime is updated.
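       A rough sketch of the criteria described above (an illustration
       only, not the actual pungi-cache-cleanup code):

          import os
          import time

          def cleanup_cache(cache_topdir, max_age_days=30):
              """Remove cached RPMs with no other hard links and an old mtime."""
              cutoff = time.time() - max_age_days * 24 * 3600
              for dirpath, _dirnames, filenames in os.walk(cache_topdir):
                  for name in filenames:
                      path = os.path.join(dirpath, name)
                      st = os.stat(path)
                      # Only in the cache (no compose hardlinks it) and not
                      # touched recently, so deleting it actually frees space.
                      if st.st_nlink == 1 and st.st_mtime < cutoff:
                          os.unlink(path)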
3322
3323 Race conditions?
3324 It should be safe to have multiple compose hosts share the same storage
3325 volume for generated composes and local cache.
3326
       If a cache file is accessed and it exists, there is no risk of a
       race condition.
3329
3330 If two composes need the same file at the same time and it is not
3331 present yet, one of them will take a lock on it and start downloading.
3332 The other will wait until the download is finished.
3333
3334 The lock is only valid for a set amount of time (5 minutes) to avoid
3335 issues where the downloading process is killed in a way that blocks it
3336 from releasing the lock.
3337
       If the file is large and the network slow, the time limit may not be
       enough to finish the download. In that case the second process will
       steal the lock while the first process is still downloading. This
       will result in the same file being downloaded twice.
3342
       When the first process finishes the download, it will put the file
       into the local cache location. When the second process finishes, it
       will atomically replace that file, but since both processes
       downloaded the same package, the content is identical.
3347
3348 If the first compose already managed to hardlink the file before it
3349 gets replaced, there will be two copies of the file present locally.
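       A simplified sketch of the locking scheme described above; the lock
       file naming and the download callback are made up for illustration,
       and some corner cases are ignored:

          import os
          import time

          LOCK_TIMEOUT = 5 * 60  # a lock older than this is considered stale

          def download_with_lock(cached_path, download):
              lock = cached_path + ".lock"
              while not os.path.exists(cached_path):
                  try:
                      # O_EXCL makes the creation atomic: only one process wins.
                      fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                  except FileExistsError:
                      if time.time() - os.path.getmtime(lock) < LOCK_TIMEOUT:
                          time.sleep(5)    # somebody else is downloading; wait
                      else:
                          os.unlink(lock)  # stale lock: steal it
                      continue
                  os.close(fd)
                  try:
                      tmp = cached_path + ".tmp.%d" % os.getpid()
                      download(tmp)
                      os.rename(tmp, cached_path)  # atomic replace
                  finally:
                      os.unlink(lock)
              return cached_path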
3350
3351 Integrity checking
       There is minimal integrity checking. RPM packages belonging to real
       builds will be checked against the checksum provided by the Koji
       hub.
3354
3355 There is no checking for scratch builds or any images.
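       A minimal sketch of such a check; the expected digest would come
       from the Koji hub (the checksum type depends on what the hub
       provides), and the helper below is illustrative rather than Pungi's
       actual code:

          import hashlib

          def verify_rpm(path, expected_digest, algorithm="sha256"):
              """Compare a downloaded RPM against the checksum reported by Koji."""
              digest = hashlib.new(algorithm)
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1024 * 1024), b""):
                      digest.update(chunk)
              return digest.hexdigest() == expected_digest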
3356
PROCESSING COMPS FILES
       The comps file that Pungi takes as input is not pure comps as used
       by tools like DNF. There are extensions that customize how the file
       is processed.
3361
3362 The first step of Pungi processing is to retrieve the actual file. This
3363 can use anything that Exporting files from SCM supports.
3364
       The Pungi extension is an arch attribute on the packageref, group
       and environment tags. The value of this attribute is a
       comma-separated list of architectures.
3368
       The second step Pungi performs is creating a file for each
       architecture. This is done by removing all elements with an
       incompatible arch attribute. No additional clean-up is performed on
       this file; the result is only used internally for the rest of the
       compose process.
3373
       The third and final step is to create a comps file for each
       Variant.Arch combination. This is the actual file that will be
       included in the compose. The starting point is the original input
       file, from which all elements with an incompatible architecture are
       removed. Clean-up is then performed by removing all empty groups,
       removing non-existent groups from environments and categories, and
       finally removing empty environments and categories. As a last step,
       groups not listed in the variants file are removed.
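       The per-architecture filtering can be pictured with a short snippet
       like the following; this is a simplified illustration using
       xml.etree, not the code Pungi actually uses:

          import xml.etree.ElementTree as ET

          def filter_comps_for_arch(input_path, output_path, arch):
              """Drop elements whose arch attribute does not list the target arch."""
              tree = ET.parse(input_path)
              for parent in tree.iter():
                  for child in list(parent):
                      wanted = child.get("arch")
                      if wanted is None:
                          continue  # no arch restriction, keep the element
                      arches = [a.strip() for a in wanted.split(",")]
                      if arch not in arches:
                          parent.remove(child)
              tree.write(output_path)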
3382
CONTRIBUTING TO PUNGI
   Set up development environment
       In order to work on Pungi, you should install a recent version of
       Fedora.

   Python2
       Fedora 29 is recommended because some packages are not available in
       newer Fedora releases, e.g. python2-libcomps.
3390
3391 Install required packages
3392
          $ sudo dnf install -y krb5-devel gcc make libcurl-devel python2-devel python2-createrepo_c kobo-rpmlib yum python2-libcomps python2-libselinux
3394
3395 Python3
3396 Install required packages
3397
3398 $ sudo dnf install -y krb5-devel gcc make libcurl-devel python3-devel python3-createrepo_c python3-libcomps
3399
3400 Developing
       Currently the development workflow for Pungi is on the master
       branch:
3402
3403 • Make your own fork at https://pagure.io/pungi
3404
3405 • Clone your fork locally (replacing $USERNAME with your own):
3406
3407 git clone git@pagure.io:forks/$USERNAME/pungi.git
3408
3409 • cd into your local clone and add the remote upstream for rebasing:
3410
3411 cd pungi
3412 git remote add upstream git@pagure.io:pungi.git
3413
3414 NOTE:
3415 This workflow assumes that you never git commit directly to the
3416 master branch of your fork. This will make more sense when we
3417 cover rebasing below.
3418
3419 • create a topic branch based on master:
3420
3421 git branch my_topic_branch master
3422 git checkout my_topic_branch
3423
3424 • Make edits, changes, add new features, etc. and then make sure to
3425 pull from upstream master and rebase before submitting a pull re‐
3426 quest:
3427
          # let's just say you edited setup.py for the sake of argument
3429 git checkout my_topic_branch
3430
3431 # make changes to setup.py
3432 black setup.py
3433 tox
3434 git add setup.py
3435 git commit -s -m "added awesome feature to setup.py"
3436
3437 # now we rebase
3438 git checkout master
3439 git pull --rebase upstream master
3440 git push origin master
3441 git push origin --tags
3442 git checkout my_topic_branch
3443 git rebase master
3444
3445 # resolve merge conflicts if any as a result of your development in
3446 # your topic branch
3447 git push origin my_topic_branch
3448
       NOTE:
          In order for your commit to be merged:

          • You must sign off on it. Use the -s option when running git
            commit.

          • The code must be formatted with black and pass flake8 checks.
            Run tox -e black,flake8 to verify.
3456
3457 • Create pull request in the pagure.io web UI
3458
       • For convenience, here is a bash shell function that can be placed
         in your ~/.bashrc and called as, for example, pullupstream
         pungi-4-devel; it automates a large portion of the rebase steps
         above:
3462
          pullupstream () {
              if [[ -z "$1" ]]; then
                  printf "Error: must specify a branch name (e.g. - master, devel)\n"
              else
                  pullup_startbranch=$(git describe --contains --all HEAD)
                  git checkout "$1"
                  git pull --rebase upstream "$1"
                  git push origin "$1"
                  git push origin --tags
                  git checkout "${pullup_startbranch}"
              fi
          }
3475
3476 Testing
3477 You must write unit tests for any new code (except for trivial
3478 changes). Any code without sufficient test coverage may not be merged.
3479
       To run all existing tests, the suggested method is to use tox.
3481
3482 $ sudo dnf install python3-tox -y
3483
3484 $ tox -e py3
3485 $ tox -e py27
3486
       Alternatively, you could create a virtualenv, install the
       dependencies and run the tests manually if you don't want to use
       tox.
3489
3490 $ sudo dnf install python3-virtualenvwrapper -y
3491 $ mkvirtualenv --system-site-packages py3
3492 $ workon py3
3493 $ pip install -r requirements.txt -r test-requirements.txt
3494 $ make test
3495
3496 # or with coverage
3497 $ make test-coverage
3498
       If you need to run specific tests, pytest is recommended.
3500
3501 # Activate virtualenv first
3502
3503 # Run tests
3504 $ pytest tests/test_config.py
3505 $ pytest tests/test_config.py -k test_pkgset_mismatch_repos
3506
       In the tests/ directory there is a shell script test_compose.sh that
       you can use to try and create a miniature compose on dummy data.
       The actual data will be created by running make test-data in the
       project root.
3510
3511 $ sudo dnf -y install rpm-build createrepo_c isomd5sum genisoimage syslinux
3512
3513 # Activate virtualenv (the one created by tox could be used)
3514 $ source .tox/py3/bin/activate
3515
3516 $ python setup.py develop
3517 $ make test-data
3518 $ make test-compose
3519
3520 This testing compose does not actually use all phases that are avail‐
3521 able, and there is no checking that the result is correct. It only
3522 tells you whether it crashed or not.
3523
3524 NOTE:
3525 Even when it finishes successfully, it may print errors about re‐
3526 poclosure on Server-Gluster.x86_64 in test phase. This is not a bug.
3527
3528 Documenting
3529 You must write documentation for any new features and functional
3530 changes. Any code without sufficient documentation may not be merged.
3531
3532 To generate the documentation, run make doc in project root.
3533
TESTING PUNGI
   Test Data
       Tests require test data and not all of it is available in git. You
       must create test repositories before running the tests:
3538
3539 make test-data
3540
3541 Requirements: createrepo_c, rpmbuild
3542
3543 Unit Tests
3544 Unit tests cover functionality of Pungi python modules. You can run
3545 all of them at once:
3546
3547 make test
3548
       which is a shortcut for:
3550
3551 python2 setup.py test
3552 python3 setup.py test
3553
3554 You can alternatively run individual tests:
3555
3556 cd tests
3557 ./<test>.py [<class>[.<test>]]
3558
   Functional Tests
       Because a compose is quite a complex process and not everything is
       covered by unit tests yet, the easiest way to check that your
       changes did not break anything badly is to start a compose on a
       relatively small and well-defined package set:
3564
3565 cd tests
3566 ./test_compose.sh
3567
AUTHOR
       Daniel Mach
3570
COPYRIGHT
       2023, Red Hat, Inc.
3573
3574
3575
3576
4.5                              Sep 25, 2023                       PUNGI(1)