ZSTD(1)                          User Commands                         ZSTD(1)

NAME
       zstd - zstd, zstdmt, unzstd, zstdcat - Compress or decompress .zst
       files

SYNOPSIS
       zstd [OPTIONS] [-|INPUT-FILE] [-o OUTPUT-FILE]

       zstdmt is equivalent to zstd -T0

       unzstd is equivalent to zstd -d

       zstdcat is equivalent to zstd -dcf

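A minimal round trip illustrating the equivalences above (file names are placeholders; zstd must be installed):

```shell
# Compress file.txt into file.txt.zst; the source file is kept by default.
zstd file.txt

# Stream the decompressed data to stdout; zstdcat is the same as zstd -dcf.
zstdcat file.txt.zst

# Decompress to a chosen name (-f would be needed if the target exists).
unzstd file.txt.zst -o restored.txt
```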
DESCRIPTION
       zstd is a fast lossless compression algorithm and data compression
       tool, with command line syntax similar to gzip(1) and xz(1). It is
       based on the LZ77 family, with further FSE & huff0 entropy stages.
       zstd offers highly configurable compression speed, with fast modes at
       > 200 MB/s per core, and strong modes nearing lzma compression
       ratios. It also features a very fast decoder, with speeds > 500 MB/s
       per core.

       zstd command line syntax is generally similar to gzip, but features
       the following differences:

       •   Source files are preserved by default. It's possible to remove
           them automatically by using the --rm option.

       •   When compressing a single file, zstd displays progress
           notifications and a result summary by default. Use -q to turn
           them off.

       •   zstd does not accept input from the console, but it properly
           accepts stdin when it is not the console.

       •   zstd displays a short help page when the command line is invalid.
           Use -q to turn it off.

       zstd compresses or decompresses each file according to the selected
       operation mode. If no files are given or file is -, zstd reads from
       standard input and writes the processed data to standard output. zstd
       will refuse to write compressed data to standard output if it is a
       terminal: it will display an error message and skip the file.
       Similarly, zstd will refuse to read compressed data from standard
       input if it is a terminal.

       Unless --stdout or -o is specified, files are written to a new file
       whose name is derived from the source file name:

       •   When compressing, the suffix .zst is appended to the source
           filename to get the target filename.

       •   When decompressing, the .zst suffix is removed from the source
           filename to get the target filename.

   Concatenation with .zst files
       It is possible to concatenate .zst files as is. zstd will decompress
       such files as if they were a single .zst file.

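For example (a sketch using the shell; file names are placeholders):

```shell
# Build two independent .zst files, then join them byte-for-byte.
printf 'hello ' | zstd -q -o part1.zst
printf 'world'  | zstd -q -o part2.zst
cat part1.zst part2.zst > joined.zst

# zstd decompresses the concatenation as one stream: prints "hello world".
zstd -dc joined.zst
```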
OPTIONS
   Integer suffixes and special values
       In most places where an integer argument is expected, an optional
       suffix is supported to easily indicate large integers. There must be
       no space between the integer and the suffix.

       KiB    Multiply the integer by 1,024 (2^10). Ki, K, and KB are
              accepted as synonyms for KiB.

       MiB    Multiply the integer by 1,048,576 (2^20). Mi, M, and MB are
              accepted as synonyms for MiB.

   Operation mode
       If multiple operation mode options are given, the last one takes
       effect.

       -z, --compress
              Compress. This is the default operation mode when no operation
              mode option is specified and no other operation mode is
              implied from the command name (for example, unzstd implies
              --decompress).

       -d, --decompress, --uncompress
              Decompress.

       -t, --test
              Test the integrity of compressed files. This option is
              equivalent to --decompress --stdout except that the
              decompressed data is discarded instead of being written to
              standard output. No files are created or removed.

       -b#    Benchmark file(s) using compression level #.

       --train FILEs
              Use FILEs as a training set to create a dictionary. The
              training set should contain a lot of small files (> 100).

       -l, --list
              Display information related to a zstd compressed file, such as
              size, ratio, and checksum. Some of these fields may not be
              available. This command can be augmented with the -v modifier.

   Operation modifiers
       •   -#: # compression level [1-19] (default: 3)

       •   --ultra: unlocks high compression levels 20+ (maximum 22), using
           a lot more memory. Note that decompression will also require more
           memory when using these levels.

       •   --fast[=#]: switch to ultra-fast compression levels. If =# is not
           present, it defaults to 1. The higher the value, the faster the
           compression speed, at the cost of some compression ratio. This
           setting overwrites the compression level if one was set
           previously. Similarly, if a compression level is set after
           --fast, it overrides it.

       •   -T#, --threads=#: Compress using # working threads (default: 1).
           If # is 0, attempt to detect and use the number of physical CPU
           cores. In all cases, the number of threads is capped to
           ZSTDMT_NBWORKERS_MAX, which is 64 in 32-bit mode and 256 in
           64-bit environments. This modifier does nothing if zstd is
           compiled without multithread support.

       •   --single-thread: Does not spawn a thread for compression; uses a
           single thread for both I/O and compression. In this mode,
           compression is serialized with I/O, which is slightly slower.
           (This is different from -T1, which spawns 1 compression thread in
           parallel with I/O.) This mode is the only one available when
           multithread support is disabled. Single-thread mode features
           lower memory usage. The final compressed result is slightly
           different from -T1.

       •   --auto-threads={physical,logical} (default: physical): When using
           a default amount of threads via -T0, choose the default based on
           the number of detected physical or logical cores.

       •   --adapt[=min=#,max=#]: zstd will dynamically adapt the
           compression level to perceived I/O conditions. Compression level
           adaptation can be observed live by using the -v option.
           Adaptation can be constrained between supplied min and max
           levels. The feature works when combined with multi-threading and
           --long mode. It does not work with --single-thread. It sets the
           window size to 8 MB by default (this can be changed manually, see
           wlog). Due to the chaotic nature of dynamic adaptation, the
           compressed result is not reproducible. Note: at the time of this
           writing, --adapt can remain stuck at low speed when combined with
           multiple worker threads (>=2).

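A typical use is feeding a remote destination whose throughput fluctuates; the host and paths below are placeholders:

```shell
# Let zstd pick the level dynamically (bounded to [5, 15]) while the
# pipeline is throttled by the network link.
tar cf - /var/data | zstd --adapt=min=5,max=15 -T0 -q | ssh backup-host 'cat > data.tar.zst'
```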
       •   --long[=#]: enables long distance matching with # windowLog; if #
           is not present, it defaults to 27. This increases the window size
           (windowLog) and memory usage for both the compressor and
           decompressor. This setting is designed to improve the compression
           ratio for files with long matches at a large distance.

           Note: If windowLog is set to larger than 27, --long=windowLog or
           --memory=windowSize needs to be passed to the decompressor.

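For example, compressing with a 30-bit window and decompressing the result (file names are placeholders):

```shell
# windowLog=30 exceeds 27, so the decompressor must be told to accept
# the larger window, either with --long=30 or with --memory=1GiB.
zstd --long=30 big.bin -o big.bin.zst
zstd -d --long=30 big.bin.zst -o restored.bin
```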
       •   -D DICT: use DICT as the dictionary to compress or decompress
           FILE(s)

       •   --patch-from FILE: Specify the file to be used as a reference
           point for zstd's diff engine. This is effectively dictionary
           compression with some convenient parameter selection, namely that
           windowSize > srcSize.

           Note: cannot use both this and -D together. Note: --long mode
           will be automatically activated if chainLog < fileLog (fileLog
           being the windowLog required to cover the whole file). You can
           also manually force it. Note: for all levels, you can use
           --patch-from in --single-thread mode to improve compression ratio
           at the cost of speed. Note: for level 19, you can get increased
           compression ratio at the cost of speed by specifying
           --zstd=targetLength= to be something large (e.g. 4096), and by
           setting a large --zstd=chainLog=.

       •   --rsyncable: zstd will periodically synchronize the compression
           state to make the compressed file more rsync-friendly. There is a
           negligible impact to compression ratio, and the faster
           compression levels will see a small compression speed hit. This
           feature does not work with --single-thread. You probably don't
           want to use it with long range mode, since it will decrease the
           effectiveness of the synchronization points, but your mileage may
           vary.

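A sketch of the intended workflow (the host and file names are placeholders):

```shell
# Recompressing a slightly changed input keeps most compressed blocks
# byte-identical, so rsync can skip them on the next transfer.
zstd --rsyncable -T0 data.bin -o data.bin.zst
rsync -av data.bin.zst backup-host:/archive/
```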
       •   -C, --[no-]check: add integrity check computed from uncompressed
           data (default: enabled)

       •   --[no-]content-size: enable / disable whether the original size
           of the file is placed in the header of the compressed file. The
           default option is --content-size (meaning that the original size
           will be placed in the header).

       •   --no-dictID: do not store the dictionary ID within the frame
           header (dictionary compression). The decoder will have to rely on
           implicit knowledge about which dictionary to use; it won't be
           able to check if it is correct.

       •   -M#, --memory=#: Set a memory usage limit. By default, Zstandard
           uses 128 MB for decompression as the maximum amount of memory the
           decompressor is allowed to use, but you can override this
           manually if need be in either direction (i.e. you can increase or
           decrease it).

           This is also used during compression with --patch-from=. In this
           case, this parameter overrides the maximum size allowed for a
           dictionary (128 MB).

           Additionally, this can be used to limit memory for dictionary
           training. This parameter overrides the default limit of 2 GB.
           zstd will load training samples up to the memory limit and ignore
           the rest.

       •   --stream-size=#: Sets the pledged source size of input coming
           from a stream. This value must be exact, as it will be included
           in the produced frame header. Incorrect stream sizes will cause
           an error. This information will be used to better optimize
           compression parameters, resulting in better and potentially
           faster compression, especially for smaller source sizes.

       •   --size-hint=#: When handling input from a stream, zstd must guess
           how large the source size will be when optimizing compression
           parameters. If the stream size is relatively small, this guess
           may be a poor one, resulting in worse compression than expected.
           This feature allows for controlling the guess when needed. Exact
           guesses result in better compression ratios. Overestimates result
           in slightly degraded compression ratios, while underestimates may
           result in significant degradation.

       •   -o FILE: save result into FILE

       •   -f, --force: disable input and output checks. Allows overwriting
           existing files, input from the console, output to stdout,
           operating on links, block devices, etc.

       •   -c, --stdout: write to standard output (even if it is the
           console)

       •   --[no-]sparse: enable / disable sparse FS support, to make files
           with many zeroes smaller on disk. Creating sparse files may save
           disk space and speed up decompression by reducing the amount of
           disk I/O. Default: enabled when output is into a file, and
           disabled when output is stdout. This setting overrides the
           default and can force sparse mode over stdout.

       •   --rm: remove source file(s) after successful compression or
           decompression. If used in combination with -o, will trigger a
           confirmation prompt (which can be silenced with -f), as this is a
           destructive operation.

       •   -k, --keep: keep source file(s) after successful compression or
           decompression. This is the default behavior.

       •   -r: operate recursively on directories. It selects all files in
           the named directory and all its subdirectories. This can be
           useful both to reduce command line typing, and to circumvent
           shell expansion limitations, when there are a lot of files and
           naming breaks the maximum size of a command line.

       •   --filelist FILE: read a list of files to process as content from
           FILE. Format is compatible with ls output, with one file per
           line.

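For example, combining find with --filelist to sidestep command-line length limits (the paths are placeholders):

```shell
# Collect targets into a list file, one path per line...
find /var/log -name '*.log' -type f > to-compress.txt

# ...then hand the whole list to zstd in one invocation.
zstd --filelist to-compress.txt
```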
       •   --output-dir-flat DIR: resulting files are stored into the target
           DIR directory, instead of the same directory as the origin file.
           Be aware that this command can introduce name collision issues,
           if multiple files, from different directories, end up having the
           same name. Collision resolution ensures the first file with a
           given name will be present in DIR, while in combination with -f,
           the last file will be present instead.

       •   --output-dir-mirror DIR: similar to --output-dir-flat, the output
           files are stored underneath the target DIR directory, but this
           option will replicate the input directory hierarchy into the
           output DIR.

           If the input directory contains "..", the files in this directory
           will be ignored. If the input directory is an absolute directory
           (i.e. "/var/tmp/abc"), it will be stored into
           "output-dir/var/tmp/abc". If there are multiple input files or
           directories, name collision resolution will follow the same rules
           as --output-dir-flat.

       •   --format=FORMAT: compress and decompress in other formats. If
           compiled with support, zstd can compress to or decompress from
           other compression algorithm formats. Possibly available options
           are zstd, gzip, xz, lzma, and lz4. If no such format is provided,
           zstd is the default.

       •   -h/-H, --help: display help/long help and exit

       •   -V, --version: display version number and exit. Advanced: -vV
           also displays supported formats. -vvV also displays POSIX
           support. -q will only display the version number, suitable for
           machine reading.

       •   -v, --verbose: verbose mode, display more information

       •   -q, --quiet: suppress warnings, interactivity, and notifications.
           Specify twice to suppress errors too.

       •   --no-progress: do not display the progress bar, but keep all
           other messages.

       •   --show-default-cparams: shows the default compression parameters
           that will be used for a particular input file. If the provided
           file is not a regular file (e.g. a named pipe), the CLI will just
           output the default parameters, that is, the parameters that are
           used when the source size is unknown.

       •   --: All arguments after -- are treated as files

   Additional options for the pzstd utility
       -p, --processes
              number of threads to use for (de)compression (default: 4)

   Restricted usage of Environment Variables
       Using environment variables to set parameters has security
       implications. Therefore, this avenue is intentionally restricted.
       Only ZSTD_CLEVEL and ZSTD_NBTHREADS are currently supported. They set
       the compression level and number of threads to use during
       compression, respectively.

       ZSTD_CLEVEL can be used to set the level between 1 and 19 (the
       "normal" range). If the value of ZSTD_CLEVEL is not a valid integer,
       it will be ignored with a warning message. ZSTD_CLEVEL just replaces
       the default compression level (3).

       ZSTD_NBTHREADS can be used to set the number of threads zstd will
       attempt to use during compression. If the value of ZSTD_NBTHREADS is
       not a valid unsigned integer, it will be ignored with a warning
       message. ZSTD_NBTHREADS has a default value of (1), and is capped at
       ZSTDMT_NBWORKERS_MAX==200. zstd must be compiled with multithread
       support for this to have any effect.

       They can both be overridden by corresponding command line arguments:
       -# for compression level and -T# for number of compression threads.

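For example (the file name is a placeholder):

```shell
# Compress at level 19 with 4 worker threads, set via the environment;
# an explicit -# or -T# on the command line would take precedence.
ZSTD_CLEVEL=19 ZSTD_NBTHREADS=4 zstd file.txt
```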
DICTIONARY BUILDER
       zstd offers dictionary compression, which greatly improves efficiency
       on small files and messages. It is possible to train zstd with a set
       of samples, the result of which is saved into a file called a
       dictionary. Then, during compression and decompression, reference the
       same dictionary using the -D dictionaryFileName option. Compression
       of small files similar to the sample set will be greatly improved.

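A typical training-and-use cycle looks like this (the directory and file names are placeholders):

```shell
# Train a dictionary from a corpus of small, similar samples...
zstd --train samples/* -o my.dict

# ...then reference it for both compression and decompression.
zstd -D my.dict msg.json -o msg.json.zst
zstd -D my.dict -d msg.json.zst -o msg.restored.json
```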
       --train FILEs
              Use FILEs as a training set to create a dictionary. The
              training set should contain a lot of small files (> 100), and
              total typically 100x the target dictionary size (for example,
              10 MB for a 100 KB dictionary). --train can be combined with
              -r to indicate a directory rather than listing all the files,
              which can be useful to circumvent shell expansion limits.

              --train supports multithreading if zstd is compiled with
              threading support (default). Additional parameters can be
              specified with --train-fastcover. The legacy dictionary
              builder can be accessed with --train-legacy. The slower cover
              dictionary builder can be accessed with --train-cover. The
              default is equivalent to --train-fastcover=d=8,steps=4.

       -o file
              Dictionary saved into file (default name: dictionary).

       --maxdict=#
              Limit dictionary to specified size (default: 112640 bytes).

       -#     Use # compression level during training (optional). Will
              generate statistics more tuned for the selected compression
              level, resulting in a small compression ratio improvement for
              this level.

       -B#    Split input files into blocks of size # (default: no split)

       -M#, --memory=#
              Limit the amount of sample data loaded for training (default:
              2 GB). See above for details.

       --dictID=#
              A dictionary ID is a locally unique ID that a decoder can use
              to verify it is using the right dictionary. By default, zstd
              will create a 4-byte random ID. It is possible to give a
              precise number instead. Short numbers have an advantage: an ID
              < 256 will only need 1 byte in the compressed frame header,
              and an ID < 65536 will only need 2 bytes. This compares
              favorably to the 4-byte default. However, it is up to the
              dictionary manager not to assign the same ID twice to 2
              different dictionaries.

       --train-cover[=k=#,d=#,steps=#,split=#,shrink[=#]]
              Select parameters for the default dictionary builder algorithm
              named cover. If d is not specified, then it tries d = 6 and
              d = 8. If k is not specified, then it tries steps values of k
              in the range [50, 2000]. If steps is not specified, then the
              default value of 40 is used. If split is not specified or
              split <= 0, then the default value of 100 is used. Requires
              that d <= k. If the shrink flag is not used, then the default
              value for shrinkDict of 0 is used. If shrink is not specified,
              then the default value for shrinkDictMaxRegression of 1 is
              used.

              Selects segments of size k with the highest score to put in
              the dictionary. The score of a segment is computed by the sum
              of the frequencies of all the subsegments of size d. Generally
              d should be in the range [6, 8], occasionally up to 16, but
              the algorithm will run faster with d <= 8. Good values for k
              vary widely based on the input data, but a safe range is
              [2 * d, 2000]. If split is 100, all input samples are used for
              both training and testing to find the optimal d and k to build
              the dictionary. Supports multithreading if zstd is compiled
              with threading support. Having shrink enabled takes a
              truncated dictionary of minimum size and doubles it in size
              until the compression ratio of the truncated dictionary is at
              most shrinkDictMaxRegression% worse than the compression ratio
              of the largest dictionary.

              Examples:

              zstd --train-cover FILEs

              zstd --train-cover=k=50,d=8 FILEs

              zstd --train-cover=d=8,steps=500 FILEs

              zstd --train-cover=k=50 FILEs

              zstd --train-cover=k=50,split=60 FILEs

              zstd --train-cover=shrink FILEs

              zstd --train-cover=shrink=2 FILEs

       --train-fastcover[=k=#,d=#,f=#,steps=#,split=#,accel=#]
              Same as cover but with extra parameters f and accel, and a
              different default value of split. If split is not specified,
              then it tries split = 75. If f is not specified, then it tries
              f = 20. Requires that 0 < f < 32. If accel is not specified,
              then it tries accel = 1. Requires that 0 < accel <= 10.
              Requires that d = 6 or d = 8.

              f is the log of the size of the array that keeps track of the
              frequency of subsegments of size d. The subsegment is hashed
              to an index in the range [0, 2^f - 1]. It is possible that 2
              different subsegments are hashed to the same index, and they
              are considered as the same subsegment when computing
              frequency. Using a higher f reduces collisions but takes
              longer.

              Examples:

              zstd --train-fastcover FILEs

              zstd --train-fastcover=d=8,f=15,accel=2 FILEs

       --train-legacy[=selectivity=#]
              Use the legacy dictionary builder algorithm with the given
              dictionary selectivity (default: 9). The smaller the
              selectivity value, the denser the dictionary, improving its
              efficiency but reducing its possible maximum size.
              --train-legacy=s=# is also accepted.

              Examples:

              zstd --train-legacy FILEs

              zstd --train-legacy=selectivity=8 FILEs

BENCHMARK
       -b#    benchmark file(s) using compression level #

       -e#    benchmark file(s) using multiple compression levels, from -b#
              to -e# (inclusive)

       -i#    minimum evaluation time, in seconds (default: 3s), benchmark
              mode only

       -B#, --block-size=#
              cut file(s) into independent blocks of size # (default: no
              block)

       --priority=rt
              set process priority to real-time

       Output Format: CompressionLevel#Filename: InputSize -> OutputSize
       (CompressionRatio), CompressionSpeed, DecompressionSpeed

       Methodology: For both compression and decompression speed, the
       entire input is compressed/decompressed in-memory to measure speed. A
       run lasts at least 1 sec, so when files are small, they are
       compressed/decompressed several times per run, in order to improve
       measurement accuracy.

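For example, sweeping the first three levels (the file name is a placeholder):

```shell
# Benchmark levels 1 through 3, with at least 2 seconds per measurement.
zstd -b1 -e3 -i2 file.bin
```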
ADVANCED COMPRESSION OPTIONS
   -B#:
       Select the size of each compression job. This parameter is only
       available when multi-threading is enabled. Each compression job is
       run in parallel, so this value indirectly impacts the number of
       active threads. The default job size varies depending on compression
       level (generally 4 * windowSize). -B# makes it possible to manually
       select a custom size. Note that job size must respect a minimum value
       which is enforced transparently. This minimum is either 512 KB, or
       overlapSize, whichever is larger. Different job sizes will lead to
       (slightly) different compressed frames.

   --zstd[=options]:
       zstd provides 22 predefined compression levels. The selected or
       default predefined compression level can be changed with advanced
       compression options. The options are provided as a comma-separated
       list. You may specify only the options you want to change and the
       rest will be taken from the selected or default compression level.
       The list of available options:

       strategy=strat, strat=strat
              Specify a strategy used by the match finder.

              There are 9 strategies numbered from 1 to 9, from faster to
              stronger: 1=ZSTD_fast, 2=ZSTD_dfast, 3=ZSTD_greedy,
              4=ZSTD_lazy, 5=ZSTD_lazy2, 6=ZSTD_btlazy2, 7=ZSTD_btopt,
              8=ZSTD_btultra, 9=ZSTD_btultra2.

       windowLog=wlog, wlog=wlog
              Specify the maximum number of bits for a match distance.

              A higher number of bits increases the chance to find a match,
              which usually improves compression ratio. It also increases
              memory requirements for the compressor and decompressor. The
              minimum wlog is 10 (1 KiB) and the maximum is 30 (1 GiB) on
              32-bit platforms and 31 (2 GiB) on 64-bit platforms.

              Note: If windowLog is set to larger than 27, --long=windowLog
              or --memory=windowSize needs to be passed to the decompressor.

       hashLog=hlog, hlog=hlog
              Specify the maximum number of bits for a hash table.

              Bigger hash tables cause fewer collisions, which usually makes
              compression faster, but requires more memory during
              compression.

              The minimum hlog is 6 (64 B) and the maximum is 30 (1 GiB).

       chainLog=clog, clog=clog
              Specify the maximum number of bits for a hash chain or a
              binary tree.

              A higher number of bits increases the chance to find a match,
              which usually improves compression ratio. It also slows down
              compression speed and increases memory requirements for
              compression. This option is ignored for the ZSTD_fast
              strategy.

              The minimum clog is 6 (64 B) and the maximum is 29 (512 MiB)
              on 32-bit platforms and 30 (1 GiB) on 64-bit platforms.

       searchLog=slog, slog=slog
              Specify the maximum number of searches in a hash chain or a
              binary tree using logarithmic scale.

              More searches increase the chance to find a match, which
              usually increases compression ratio but decreases compression
              speed.

              The minimum slog is 1 and the maximum is 'windowLog' - 1.

       minMatch=mml, mml=mml
              Specify the minimum searched length of a match in a hash
              table.

              Larger search lengths usually decrease compression ratio but
              improve decompression speed.

              The minimum mml is 3 and the maximum is 7.

       targetLength=tlen, tlen=tlen
              The impact of this field varies depending on the selected
              strategy.

              For ZSTD_btopt, ZSTD_btultra and ZSTD_btultra2, it specifies
              the minimum match length that causes the match finder to stop
              searching. A larger targetLength usually improves compression
              ratio but decreases compression speed.

              For ZSTD_fast, it triggers ultra-fast mode when > 0. The value
              represents the amount of data skipped between match sampling.
              Impact is reversed: a larger targetLength increases
              compression speed but decreases compression ratio.

              For all other strategies, this field has no impact.

              The minimum tlen is 0 and the maximum is 128 KiB.

       overlapLog=ovlog, ovlog=ovlog
              Determine overlapSize, the amount of data reloaded from the
              previous job. This parameter is only available when
              multithreading is enabled. Reloading more data improves
              compression ratio, but decreases speed.

              The minimum ovlog is 0, and the maximum is 9. 1 means "no
              overlap", hence completely independent jobs. 9 means "full
              overlap", meaning up to windowSize is reloaded from the
              previous job. Reducing ovlog by 1 reduces the reloaded amount
              by a factor of 2. For example, 8 means "windowSize/2", and 6
              means "windowSize/8". Value 0 is special and means "default":
              ovlog is automatically determined by zstd, in which case ovlog
              will range from 6 to 9, depending on the selected strat.

       ldmHashLog=lhlog, lhlog=lhlog
              Specify the maximum size for the hash table used for long
              distance matching.

              This option is ignored unless long distance matching is
              enabled.

              Bigger hash tables usually improve compression ratio at the
              expense of more memory during compression and a decrease in
              compression speed.

              The minimum lhlog is 6 and the maximum is 30 (default: 20).

       ldmMinMatch=lmml, lmml=lmml
              Specify the minimum searched length of a match for long
              distance matching.

              This option is ignored unless long distance matching is
              enabled.

              Larger/very small values usually decrease compression ratio.

              The minimum lmml is 4 and the maximum is 4096 (default: 64).

       ldmBucketSizeLog=lblog, lblog=lblog
              Specify the size of each bucket for the hash table used for
              long distance matching.

              This option is ignored unless long distance matching is
              enabled.

              Larger bucket sizes improve collision resolution but decrease
              compression speed.

              The minimum lblog is 1 and the maximum is 8 (default: 3).

       ldmHashRateLog=lhrlog, lhrlog=lhrlog
              Specify the frequency of inserting entries into the long
              distance matching hash table.

              This option is ignored unless long distance matching is
              enabled.

              Larger values will improve compression speed. Deviating far
              from the default value will likely result in a decrease in
              compression ratio.

              The default value is wlog - lhlog.

   Example
       The following parameters set advanced compression options to
       something similar to predefined level 19 for files bigger than 256
       KB:

       --zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6
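Such a parameter string is combined with an ordinary invocation; for example (the file name is a placeholder):

```shell
# Apply the level-19-like parameters from the example above.
zstd --zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6 file.bin -o file.bin.zst
```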

BUGS
       Report bugs at: https://github.com/facebook/zstd/issues

AUTHOR
       Yann Collet

zstd 1.5.2                       January 2022                          ZSTD(1)