VARNISHD(1)                                                        VARNISHD(1)
2
3
4
NAME
       varnishd - HTTP accelerator daemon
7
SYNOPSIS
       varnishd [-a [name=][ad‐
10 dress][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]] [-b
11 [host[:port]|path]] [-C] [-d] [-F] [-f config] [-h type[,options]] [-I
12 clifile] [-i identity] [-j jail[,jailoptions]] [-l vsl] [-M ad‐
13 dress:port] [-n name] [-P file] [-p param=value] [-r param[,param...]]
14 [-S secret-file] [-s [name=]kind[,options]] [-T address[:port]] [-t
15 TTL] [-V] [-W waiter]
16
17 varnishd [-x parameter|vsl|cli|builtin|optstring]
18
19 varnishd [-?]
20
DESCRIPTION
       The varnishd daemon accepts HTTP requests from clients, passes them on
23 to a backend server and caches the returned documents to better satisfy
24 future requests for the same document.
25
OPTIONS
   Basic options
28 -a <[name=][ad‐
29 dress][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]>
30 Listen for client requests on the specified address and port. The
31 address can be a host name ("localhost"), an IPv4 dotted-quad
32 ("127.0.0.1"), an IPv6 address enclosed in square brackets
33 ("[::1]"), or a path beginning with a '/' for a Unix domain socket
34 ("/path/to/listen.sock"). If address is not specified, varnishd will
35 listen on all available IPv4 and IPv6 interfaces. If port is not
36 specified, port 80 (http) is used. At least one of address or port
37 is required.
38
39 If a Unix domain socket is specified as the listen address, then the
40 user, group and mode sub-arguments may be used to specify the per‐
41 missions of the socket file -- use names for user and group, and a
42 3-digit octal value for mode. These sub-arguments are not permitted
43 if an IP address is specified. When Unix domain socket listeners are
44 in use, all VCL configurations must have version >= 4.1.
45
46 Name is referenced in logs. If name is not specified, "a0", "a1",
47 etc. is used. An additional protocol type can be set for the listen‐
48 ing socket with PROTO. Valid protocol types are: HTTP (default), and
49 PROXY.
50
51 Multiple listening addresses can be specified by using different -a
52 arguments.
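
       For illustration only (the addresses, socket path and listener names
       below are arbitrary), several listeners could be combined in one
       invocation:

          varnishd -a http=:80 \
                   -a proxy=192.0.2.10:8443,PROXY \
                   -a uds=/var/run/varnish.sock,user=varnish,group=varnish,mode=660 \
                   -f /etc/varnish/default.vcl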
53
54 -b <[host[:port]|path]>
55 Use the specified host as backend server. If port is not speci‐
56 fied, the default is 8080.
57
58 If the value of -b begins with /, it is interpreted as the abso‐
59 lute path of a Unix domain socket to which Varnish connects. In
60 that case, the value of -b must satisfy the conditions required
61 for the .path field of a backend declaration, see vcl(7). Back‐
62 ends with Unix socket addresses may only be used with VCL ver‐
63 sions >= 4.1.
64
       -b can be used only once, and not together with -f.
66
67 -f config
68 Use the specified VCL configuration file instead of the builtin
69 default. See vcl(7) for details on VCL syntax.
70
71 If a single -f option is used, then the VCL instance loaded from
72 the file is named "boot" and immediately becomes active. If more
73 than one -f option is used, the VCL instances are named "boot0",
74 "boot1" and so forth, in the order corresponding to the -f argu‐
75 ments, and the last one is named "boot", which becomes active.
76
77 Either -b or one or more -f options must be specified, but not
78 both, and they cannot both be left out, unless -d is used to
79 start varnishd in debugging mode. If the empty string is speci‐
80 fied as the sole -f option, then varnishd starts without start‐
81 ing the worker process, and the management process will accept
82 CLI commands. You can also combine an empty -f option with an
83 initialization script (-I option) and the child process will be
84 started if there is an active VCL at the end of the initializa‐
85 tion.
86
87 When used with a relative file name, config is searched in the
88 vcl_path. It is possible to set this path prior to using -f op‐
89 tions with a -p option. During startup, varnishd doesn't com‐
90 plain about unsafe VCL paths: unlike the varnish-cli(7) that
91 could later be accessed remotely, starting varnishd requires lo‐
92 cal privileges.
93
94 -n name
95 Specify the name for this instance. This name is used to con‐
96 struct the name of the directory in which varnishd keeps tempo‐
97 rary files and persistent state. If the specified name begins
98 with a forward slash, it is interpreted as the absolute path to
99 the directory.
100
101 Documentation options
102 For these options, varnishd prints information to standard output and
103 exits. When a -x option is used, it must be the only option (it outputs
104 documentation in reStructuredText, aka RST).
105
106 -?
107 Print the usage message.
108
109 -x parameter
110 Print documentation of the runtime parameters (-p options), see
111 List of Parameters.
112
113 -x vsl Print documentation of the tags used in the Varnish shared mem‐
114 ory log, see vsl(7).
115
116 -x cli Print documentation of the command line interface, see var‐
117 nish-cli(7).
118
119 -x builtin
120 Print the contents of the default VCL program builtin.vcl.
121
122 -x optstring
123 Print the optstring parameter to getopt(3) to help writing wrap‐
124 per scripts.
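
       A minimal sketch of how a wrapper script might consume this output
       (the option handling shown is illustrative, not part of varnishd):

          opts="$(varnishd -x optstring)"
          while getopts "$opts" opt; do
              case "$opt" in
                  n) instance="$OPTARG" ;;   # e.g. pick up the instance name
                  *) ;;                      # other options handled elsewhere
              esac
          done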
125
126 Operations options
127 -F Do not fork, run in the foreground. Only one of -F or -d can be
128 specified, and -F cannot be used together with -C.
129
130 -T <address[:port]>
131 Offer a management interface on the specified address and port.
132 See varnish-cli(7) for documentation of the management commands.
133 To disable the management interface use none.
134
135 -M <address:port>
       Connect to this port and offer the command line interface. Think of
       it as a reverse shell. When running with -M and there is no backend
       defined, the child process (the cache) will not start initially.
140
141 -P file
142 Write the PID of the process to the specified file.
143
144 -i identity
145 Specify the identity of the Varnish server. This can be accessed
146 using server.identity from VCL and with VSM_Name() from utili‐
147 ties. If not specified the output of gethostname(3) is used.
148
149 -I clifile
150 Execute the management commands in the file given as clifile be‐
       fore the worker process starts, see CLI Command File.
152
153 Tuning options
154 -t TTL Specifies the default time to live (TTL) for cached objects.
155 This is a shortcut for specifying the default_ttl run-time pa‐
156 rameter.
157
158 -p <param=value>
159 Set the parameter specified by param to the specified value, see
160 List of Parameters for details. This option can be used multiple
161 times to specify multiple parameters.
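
       For example (the values are purely illustrative), several parameters
       can be set at startup:

          varnishd -f /etc/varnish/default.vcl \
                   -p default_ttl=600 \
                   -p thread_pool_min=200 -p thread_pool_max=4000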
162
163 -s <[name=]type[,options]>
164 Use the specified storage backend. See Storage Backend section.
165
166 This option can be used multiple times to specify multiple stor‐
167 age files. Name is referenced in logs, VCL, statistics, etc. If
168 name is not specified, "s0", "s1" and so forth is used.
169
170 -l <vsl>
171 Specifies size of the space for the VSL records, shorthand for
172 -p vsl_space=<vsl>. Scaling suffixes like 'K' and 'M' can be
173 used up to (G)igabytes. See vsl_space for more information.
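
       For instance, the following two invocations reserve the same amount
       of VSL space (file name illustrative):

          varnishd -f /etc/varnish/default.vcl -l 200M
          varnishd -f /etc/varnish/default.vcl -p vsl_space=200M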
174
175 Security options
176 -r <param[,param...]>
177 Make the listed parameters read only. This gives the system ad‐
178 ministrator a way to limit what the Varnish CLI can do. Con‐
179 sider making parameters such as cc_command, vcc_allow_inline_c
180 and vmod_path read only as these can potentially be used to es‐
181 calate privileges from the CLI.
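
       A hardened invocation might therefore lock down exactly those
       parameters, for example:

          varnishd -f /etc/varnish/default.vcl \
                   -r cc_command,vcc_allow_inline_c,vmod_path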
182
183 -S secret-file
184 Path to a file containing a secret used for authorizing access
185 to the management port. To disable authentication use none.
186
       If this argument is not provided, a secret drawn from the system
       PRNG will be written to a file called _.secret in the working
       directory (see the -n option) with default ownership and permissions
       of the user that started varnish.

       Thus, users wishing to delegate control over varnish will probably
       want to create a custom secret file with appropriate permissions
       (i.e. readable by a unix group to delegate control to).
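
       A minimal sketch of preparing such a file (the path, group and block
       size are illustrative assumptions):

          dd if=/dev/urandom of=/etc/varnish/secret count=1 bs=512
          chown root:varnish /etc/varnish/secret
          chmod 640 /etc/varnish/secret
          varnishd -f /etc/varnish/default.vcl -S /etc/varnish/secret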
195
196 -j <jail[,jailoptions]>
197 Specify the jailing mechanism to use. See Jail section.
198
199 Advanced, development and debugging options
200 -d Enables debugging mode: The parent process runs in the fore‐
201 ground with a CLI connection on stdin/stdout, and the child
202 process must be started explicitly with a CLI command. Terminat‐
203 ing the parent process will also terminate the child.
204
205 Only one of -d or -F can be specified, and -d cannot be used to‐
206 gether with -C.
207
208 -C Print VCL code compiled to C language and exit. Specify the VCL
209 file to compile with the -f option. Either -f or -b must be used
210 with -C, and -C cannot be used with -F or -d.
211
212 -V Display the version number and exit. This must be the only op‐
213 tion.
214
215 -h <type[,options]>
216 Specifies the hash algorithm. See Hash Algorithm section for a
217 list of supported algorithms.
218
219 -W waiter
220 Specifies the waiter type to use.
221
222 Hash Algorithm
223 The following hash algorithms are available:
224
225 -h critbit
       A self-scaling tree structure. The default hash algorithm in Varnish
       Cache 2.1 and onwards. In comparison to a more traditional B tree,
       the critbit tree is almost completely lockless. Do not change this
       unless you are certain of what you're doing.
230
231 -h simple_list
232 A simple doubly-linked list. Not recommended for production
233 use.
234
235 -h <classic[,buckets]>
236 A standard hash table. The hash key is the CRC32 of the object's
237 URL modulo the size of the hash table. Each table entry points
238 to a list of elements which share the same hash key. The buckets
239 parameter specifies the number of entries in the hash table.
240 The default is 16383.
241
242 Storage Backend
243 The argument format to define storage backends is:
244
245 -s <[name]=kind[,options]>
246 If name is omitted, Varnish will name storages sN, starting with
247 s0 and incrementing N for every new storage.
248
249 For kind and options see details below.
250
       Storages can be used in VCL as storage.<name>. For example, if
       myStorage was defined by -s myStorage=malloc,5G, it could be used in
       VCL like so:
254
255 set beresp.storage = storage.myStorage;
256
       A special name is Transient, which is the default storage for
       uncacheable objects such as those resulting from a pass, hit-for-miss
       or hit-for-pass.
260
261 If no -s options are given, the default is:
262
263 -s malloc=100m
264
265 If no Transient storage is defined, the default is an unbound malloc
266 storage as if defined as:
267
268 -s Transient=malloc
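
       Transient can also be bounded by defining it explicitly; for example
       (sizes illustrative):

          varnishd -f /etc/varnish/default.vcl \
                   -s malloc=4G \
                   -s Transient=malloc,512m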
269
270 The following storage types and options are available:
271
272 -s <default[,size]>
273 The default storage type resolves to umem where available and
274 malloc otherwise.
275
276 -s <malloc[,size]>
277 malloc is a memory based backend.
278
279 -s <umem[,size]>
280 umem is a storage backend which is more efficient than malloc on
281 platforms where it is available.
282
283 See the section on umem in chapter Storage backends of The Var‐
284 nish Users Guide for details.
285
286 -s <file,path[,size[,granularity[,advice]]]>
       The file backend stores data in a file on disk. The file will be
       accessed using mmap. Note that this storage provides no cache
       persistence.
290
291 The path is mandatory. If path points to a directory, a tempo‐
292 rary file will be created in that directory and immediately un‐
293 linked. If path points to a non-existing file, the file will be
294 created.
295
296 If size is omitted, and path points to an existing file with a
297 size greater than zero, the size of that file will be used. If
298 not, an error is reported.
299
300 Granularity sets the allocation block size. Defaults to the sys‐
301 tem page size or the filesystem block size, whichever is larger.
302
       Advice tells the kernel how varnishd expects to use this mapped
       region so that the kernel can choose the appropriate read-ahead and
       caching techniques. Possible values are normal, random and
       sequential, corresponding to the MADV_NORMAL, MADV_RANDOM and
       MADV_SEQUENTIAL madvise() advice arguments, respectively. Defaults
       to random.
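
       As an illustration (the path and size are arbitrary), a file storage
       could be configured as:

          varnishd -f /etc/varnish/default.vcl \
                   -s cache=file,/var/lib/varnish/file_storage.bin,10G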
309
310 -s <persistent,path,size>
311 Persistent storage. Varnish will store objects in a file in a
312 manner that will secure the survival of most of the objects in
313 the event of a planned or unplanned shutdown of Varnish. The
       persistent storage backend has multiple issues and will likely be
       removed in a future version of Varnish.
316
317 Jail
318 Varnish jails are a generalization over various platform specific meth‐
319 ods to reduce the privileges of varnish processes. They may have spe‐
320 cific options. Available jails are:
321
322 -j <solaris[,worker=`privspec`]>
       Reduce privileges(5) for varnishd and its sub-processes to the
       minimally required set. Only available on platforms which have the
       setppriv(2) call.
326
       The optional worker argument can be used to pass a
       privilege-specification (see ppriv(1)) by which to extend the
       effective set of the varnish worker process. While extended
       privileges may be required by custom vmods, it is always more secure
       not to use the worker option.
332
333 Example to grant basic privileges to the worker process:
334
335 -j solaris,worker=basic
336
337 -j <unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
338 Default on all other platforms when varnishd is started with an
339 effective uid of 0 ("as root").
340
341 With the unix jail mechanism activated, varnish will switch to
342 an alternative user for subprocesses and change the effective
343 uid of the master process whenever possible.
344
345 The optional user argument specifies which alternative user to
346 use. It defaults to varnish.
347
348 The optional ccgroup argument specifies a group to add to var‐
349 nish subprocesses requiring access to a c-compiler. There is no
350 default.
351
352 The optional workuser argument specifies an alternative user to
353 use for the worker process. It defaults to vcache.
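
       For example (the user and group names are illustrative), an explicit
       unix jail specification could look like:

          varnishd -f /etc/varnish/default.vcl \
                   -j unix,user=varnish,ccgroup=varnish,workuser=vcache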
354
355 -j none
       Last-resort jail choice: with the jail mechanism none, varnish will
       run all processes with the privileges it was started with.
358
359 Management Interface
360 If the -T option was specified, varnishd will offer a command-line man‐
361 agement interface on the specified address and port. The recommended
362 way of connecting to the command-line management interface is through
363 varnishadm(1).
364
365 The commands available are documented in varnish-cli(7).
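
       As a sketch (address, port and secret path are illustrative), a
       management session could look like:

          varnishd -f /etc/varnish/default.vcl \
                   -T 127.0.0.1:6082 -S /etc/varnish/secret
          varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret ping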
366
367 CLI Command File
368 The -I option makes it possible to run arbitrary management commands
369 when varnishd is launched, before the worker process is started. In
370 particular, this is the way to load configurations, apply labels to
371 them, and make a VCL instance active that uses those labels on startup:
372
373 vcl.load panic /etc/varnish_panic.vcl
374 vcl.load siteA0 /etc/varnish_siteA.vcl
375 vcl.load siteB0 /etc/varnish_siteB.vcl
376 vcl.load siteC0 /etc/varnish_siteC.vcl
377 vcl.label siteA siteA0
378 vcl.label siteB siteB0
379 vcl.label siteC siteC0
380 vcl.load main /etc/varnish_main.vcl
381 vcl.use main
382
383 Every line in the file, including the last line, must be terminated by
384 a newline or carriage return.
385
386 If a command in the file is prefixed with '-', failure will not abort
387 the startup.
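
       Combined with an empty -f option, such a file can drive the entire
       startup; a sketch (the file path is illustrative):

          varnishd -a :80 -f '' -I /etc/varnish/boot.cli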
388
RUN TIME PARAMETERS
   Run Time Parameter Flags
       Runtime parameters are marked with shorthand flags to avoid repeating
       the same text over and over in the table below. The meanings of the
       flags are:
394
395 • experimental
396
       We have no solid information about good/bad/optimal values for this
       parameter. Feedback with experience and observations is most welcome.
400
401 • delayed
402
403 This parameter can be changed on the fly, but will not take effect
404 immediately.
405
406 • restart
407
408 The worker process must be stopped and restarted, before this parame‐
409 ter takes effect.
410
411 • reload
412
413 The VCL programs must be reloaded for this parameter to take effect.
414
415 • wizard
416
417 Do not touch unless you really know what you're doing.
418
419 • only_root
420
421 Only works if varnishd is running as root.
422
423 Default Value Exceptions on 32 bit Systems
424 Be aware that on 32 bit systems, certain default or maximum values are
425 reduced relative to the values listed below, in order to conserve VM
426 space:
427
428 • workspace_client: 24k
429
430 • workspace_backend: 20k
431
432 • http_resp_size: 8k
433
434 • http_req_size: 12k
435
436 • gzip_buffer: 4k
437
438 • vsl_space: 1G (maximum)
439
440 • thread_pool_stack: 52k
441
442 List of Parameters
443 This text is produced from the same text you will find in the CLI if
444 you use the param.show command:
445
446 accept_filter
447 NB: This parameter depends on a feature which is not available on all
448 platforms.
449
450 • Units: bool
451
452 • Default: on (if your platform supports accept filters)
453
454 Enable kernel accept-filters. This may require a kernel module to be
455 loaded to have an effect when enabled.
456
       Enabling accept_filter may prevent some requests from reaching
       Varnish in the first place. Malformed requests may go unnoticed and
       not increase the client_req_400 counter. GET or HEAD requests with a
       body may be blocked altogether.
461
462 acceptor_sleep_decay
463 • Default: 0.9
464
465 • Minimum: 0
466
467 • Maximum: 1
468
469 • Flags: experimental
470
       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       (multiplicatively) reduces the sleep duration for each successful
       accept (i.e. 0.9 = reduce by 10%).
475
476 acceptor_sleep_incr
477 • Units: seconds
478
479 • Default: 0.000
480
481 • Minimum: 0.000
482
483 • Maximum: 1.000
484
485 • Flags: experimental
486
       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       controls how much longer we sleep, each time we fail to accept a new
       connection.
490
491 acceptor_sleep_max
492 • Units: seconds
493
494 • Default: 0.050
495
496 • Minimum: 0.000
497
498 • Maximum: 10.000
499
500 • Flags: experimental
501
502 If we run out of resources, such as file descriptors or worker threads,
503 the acceptor will sleep between accepts. This parameter limits how
504 long it can sleep between attempts to accept new connections.
505
506 auto_restart
507 • Units: bool
508
509 • Default: on
510
511 Automatically restart the child/worker process if it dies.
512
513 backend_idle_timeout
514 • Units: seconds
515
516 • Default: 60.000
517
518 • Minimum: 1.000
519
520 Timeout before we close unused backend connections.
521
522 backend_local_error_holddown
523 • Units: seconds
524
525 • Default: 10.000
526
527 • Minimum: 0.000
528
529 • Flags: experimental
530
       When connecting to backends, certain error codes (EADDRNOTAVAIL,
       EACCES, EPERM) signal a local resource shortage or configuration
       issue for which retrying connection attempts may worsen the situation
       due to the complexity of the operations involved in the kernel. This
       parameter prevents repeated connection attempts for the configured
       duration.
536
537 backend_remote_error_holddown
538 • Units: seconds
539
540 • Default: 0.250
541
542 • Minimum: 0.000
543
544 • Flags: experimental
545
       When connecting to backends, certain error codes (ECONNREFUSED,
       ENETUNREACH) signal fundamental connection issues such as the backend
       not accepting connections or routing problems, for which repeated
       connection attempts are considered useless. This parameter prevents
       repeated connection attempts for the configured duration.
551
552 ban_cutoff
553 • Units: bans
554
555 • Default: 0
556
557 • Minimum: 0
558
559 • Flags: experimental
560
561 Expurge long tail content from the cache to keep the number of bans be‐
562 low this value. 0 disables.
563
564 When this parameter is set to a non-zero value, the ban lurker contin‐
565 ues to work the ban list as usual top to bottom, but when it reaches
566 the ban_cutoff-th ban, it treats all objects as if they matched a ban
567 and expurges them from cache. As actively used objects get tested
568 against the ban list at request time and thus are likely to be associ‐
569 ated with bans near the top of the ban list, with ban_cutoff, least re‐
570 cently accessed objects (the "long tail") are removed.
571
572 This parameter is a safety net to avoid bad response times due to bans
573 being tested at lookup time. Setting a cutoff trades response time for
574 cache efficiency. The recommended value is proportional to
575 rate(bans_lurker_tests_tested) / n_objects while the ban lurker is
576 working, which is the number of bans the system can sustain. The addi‐
577 tional latency due to request ban testing is in the order of ban_cutoff
578 / rate(bans_lurker_tests_tested). For example, for
579 rate(bans_lurker_tests_tested) = 2M/s and a tolerable latency of 100ms,
580 a good value for ban_cutoff may be 200K.
581
582 ban_dups
583 • Units: bool
584
585 • Default: on
586
587 Eliminate older identical bans when a new ban is added. This saves CPU
588 cycles by not comparing objects to identical bans. This is a waste of
589 time if you have many bans which are never identical.
590
591 ban_lurker_age
592 • Units: seconds
593
594 • Default: 60.000
595
596 • Minimum: 0.000
597
598 The ban lurker will ignore bans until they are this old. When a ban is
599 added, the active traffic will be tested against it as part of object
600 lookup. Because many applications issue bans in bursts, this parameter
601 holds the ban-lurker off until the rush is over. This should be set to
602 the approximate time which a ban-burst takes.
603
604 ban_lurker_batch
605 • Default: 1000
606
607 • Minimum: 1
608
609 The ban lurker sleeps ${ban_lurker_sleep} after examining this many ob‐
610 jects. Use this to pace the ban-lurker if it eats too many resources.
611
612 ban_lurker_holdoff
613 • Units: seconds
614
615 • Default: 0.010
616
617 • Minimum: 0.000
618
619 • Flags: experimental
620
621 How long the ban lurker sleeps when giving way to lookup due to lock
622 contention.
623
624 ban_lurker_sleep
625 • Units: seconds
626
627 • Default: 0.010
628
629 • Minimum: 0.000
630
631 How long the ban lurker sleeps after examining ${ban_lurker_batch} ob‐
632 jects. Use this to pace the ban-lurker if it eats too many resources.
633 A value of zero will disable the ban lurker entirely.
634
635 between_bytes_timeout
636 • Units: seconds
637
638 • Default: 60.000
639
640 • Minimum: 0.000
641
       We only wait for this many seconds between bytes received from the
       backend before giving up the fetch. VCL values, per backend or per
       backend request, take precedence. This parameter does not apply to
       pipe'ed requests.
646
647 cc_command
648 • Default: defined when Varnish is built
649
650 • Flags: must_reload
651
652 Command used for compiling the C source code to a dlopen(3) loadable
653 object. Any occurrence of %s in the string will be replaced with the
654 source file name, and %o will be replaced with the output file name.
655
656 cli_limit
657 • Units: bytes
658
659 • Default: 48k
660
661 • Minimum: 128b
662
663 • Maximum: 99999999b
664
665 Maximum size of CLI response. If the response exceeds this limit, the
666 response code will be 201 instead of 200 and the last line will indi‐
667 cate the truncation.
668
669 cli_timeout
670 • Units: seconds
671
672 • Default: 60.000
673
674 • Minimum: 0.000
675
       Timeout for the child's replies to CLI requests from the management
       process.
677
678 clock_skew
679 • Units: seconds
680
681 • Default: 10
682
683 • Minimum: 0
684
       How much clock skew we are willing to accept between the backend and
       our own clock.
687
688 clock_step
689 • Units: seconds
690
691 • Default: 1.000
692
693 • Minimum: 0.000
694
695 How much observed clock step we are willing to accept before we panic.
696
697 connect_timeout
698 • Units: seconds
699
700 • Default: 3.500
701
702 • Minimum: 0.000
703
704 Default connection timeout for backend connections. We only try to con‐
705 nect to the backend for this many seconds before giving up. VCL can
706 override this default value for each backend and backend request.
707
708 critbit_cooloff
709 • Units: seconds
710
711 • Default: 180.000
712
713 • Minimum: 60.000
714
715 • Maximum: 254.000
716
717 • Flags: wizard
718
719 How long the critbit hasher keeps deleted objheads on the cooloff list.
720
721 debug
722 • Default: none
723
724 Enable/Disable various kinds of debugging.
725
726 none Disable all debugging
727
728 Use +/- prefix to set/reset individual bits:
729
730 req_state
731 VSL Request state engine
732
733 workspace
734 VSL Workspace operations
735
736 waitinglist
737 VSL Waitinglist events
738
739 syncvsl
740 Make VSL synchronous
741
742 hashedge
743 Edge cases in Hash
744
745 vclrel Rapid VCL release
746
747 lurker VSL Ban lurker
748
749 esi_chop
750 Chop ESI fetch to bits
751
752 flush_head
753 Flush after http1 head
754
755 vtc_mode
756 Varnishtest Mode
757
758 witness
759 Emit WITNESS lock records
760
761 vsm_keep
762 Keep the VSM file on restart
763
764 drop_pools
765 Drop thread pools (testing)
766
767 slow_acceptor
768 Slow down Acceptor
769
770 h2_nocheck
771 Disable various H2 checks
772
773 vmod_so_keep
774 Keep copied VMOD libraries
775
776 processors
777 Fetch/Deliver processors
778
779 protocol
780 Protocol debugging
781
782 vcl_keep
783 Keep VCL C and so files
784
785 lck Additional lock statistics
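
       For example (the chosen bits are illustrative), debug bits can be set
       at startup with:

          varnishd -f /etc/varnish/default.vcl -p debug=+syncvsl,+witness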
786
787 default_grace
788 • Units: seconds
789
790 • Default: 10.000
791
792 • Minimum: 0.000
793
794 • Flags: obj_sticky
795
796 Default grace period. We will deliver an object this long after it has
797 expired, provided another thread is attempting to get a new copy.
798
799 default_keep
800 • Units: seconds
801
802 • Default: 0.000
803
804 • Minimum: 0.000
805
806 • Flags: obj_sticky
807
808 Default keep period. We will keep a useless object around this long,
809 making it available for conditional backend fetches. That means that
810 the object will be removed from the cache at the end of ttl+grace+keep.
811
812 default_ttl
813 • Units: seconds
814
815 • Default: 120.000
816
817 • Minimum: 0.000
818
819 • Flags: obj_sticky
820
821 The TTL assigned to objects if neither the backend nor the VCL code as‐
822 signs one.
823
824 feature
825 • Default: none
826
827 Enable/Disable various minor features.
828
829 none Disable all features.
830
831 Use +/- prefix to enable/disable individual feature:
832
833 http2 Enable HTTP/2 protocol support.
834
835 short_panic
836 Short panic message.
837
838 no_coredump
839 No coredumps. Must be set before child process starts.
840
841 https_scheme
842 Extract host from full URI in the HTTP/1 request line, if the
843 scheme is https.
844
845 http_date_postel
846 Tolerate non compliant timestamp headers like Date, Last-Mod‐
847 ified, Expires etc.
848
849 esi_ignore_https
              Convert <esi:include src="https://... to http://...
851
852 esi_disable_xml_check
853 Allow ESI processing on non-XML ESI bodies
854
855 esi_ignore_other_elements
856 Ignore XML syntax errors in ESI bodies.
857
858 esi_remove_bom
859 Ignore UTF-8 BOM in ESI bodies.
860
861 wait_silo
862 Wait for persistent silos to completely load before serving
863 requests.
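
       For example (this particular combination is illustrative), individual
       features can be switched at startup with:

          varnishd -f /etc/varnish/default.vcl -p feature=+http2,+no_coredump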
864
865 fetch_chunksize
866 • Units: bytes
867
868 • Default: 16k
869
870 • Minimum: 4k
871
872 • Flags: experimental
873
       The default chunksize used by the fetcher. This should be bigger than
       the majority of objects with short TTLs. Internal limits in the
       storage_file module make increases above 128kb a dubious idea.
877
878 fetch_maxchunksize
879 • Units: bytes
880
881 • Default: 0.25G
882
883 • Minimum: 64k
884
885 • Flags: experimental
886
887 The maximum chunksize we attempt to allocate from storage. Making this
888 too large may cause delays and storage fragmentation.
889
890 first_byte_timeout
891 • Units: seconds
892
893 • Default: 60.000
894
895 • Minimum: 0.000
896
897 Default timeout for receiving first byte from backend. We only wait for
898 this many seconds for the first byte before giving up. VCL can over‐
899 ride this default value for each backend and backend request. This pa‐
900 rameter does not apply to pipe'ed requests.
901
902 gzip_buffer
903 • Units: bytes
904
905 • Default: 32k
906
907 • Minimum: 2k
908
909 • Flags: experimental
910
       Size of malloc buffer used for gzip processing. These buffers are
       used for in-transit data, for instance gunzip'ed data being sent to a
       client. Making this space too small results in more overhead and more
       writes to sockets; making it too big is probably just a waste of
       memory.
915
916 gzip_level
917 • Default: 6
918
919 • Minimum: 0
920
921 • Maximum: 9
922
923 Gzip compression level: 0=debug, 1=fast, 9=best
924
925 gzip_memlevel
926 • Default: 8
927
928 • Minimum: 1
929
930 • Maximum: 9
931
932 Gzip memory level 1=slow/least, 9=fast/most compression. Memory impact
933 is 1=1k, 2=2k, ... 9=256k.
934
935 h2_header_table_size
936 • Units: bytes
937
938 • Default: 4k
939
940 • Minimum: 0b
941
942 HTTP2 header table size. This is the size that will be used for the
943 HPACK dynamic decoding table.
944
945 h2_initial_window_size
946 • Units: bytes
947
948 • Default: 65535b
949
950 • Minimum: 0b
951
952 • Maximum: 2147483647b
953
954 HTTP2 initial flow control window size.
955
956 h2_max_concurrent_streams
957 • Units: streams
958
959 • Default: 100
960
961 • Minimum: 0
962
963 HTTP2 Maximum number of concurrent streams. This is the number of re‐
964 quests that can be active at the same time for a single HTTP2 connec‐
965 tion.
966
967 h2_max_frame_size
968 • Units: bytes
969
970 • Default: 16k
971
972 • Minimum: 16k
973
974 • Maximum: 16777215b
975
976 HTTP2 maximum per frame payload size we are willing to accept.
977
978 h2_max_header_list_size
979 • Units: bytes
980
981 • Default: 2147483647b
982
983 • Minimum: 0b
984
985 HTTP2 maximum size of an uncompressed header list.
986
987 h2_rx_window_increment
988 • Units: bytes
989
990 • Default: 1M
991
992 • Minimum: 1M
993
994 • Maximum: 1G
995
996 • Flags: wizard
997
       HTTP2 receive window increments. The size of the credits we send in
       WINDOW_UPDATE frames. Only affects incoming request bodies (i.e.
       POST, PUT etc.).
1000
1001 h2_rx_window_low_water
1002 • Units: bytes
1003
1004 • Default: 10M
1005
1006 • Minimum: 65535b
1007
1008 • Maximum: 1G
1009
1010 • Flags: wizard
1011
       HTTP2 receive window low water mark. We try to keep the window at
       least this big. Only affects incoming request bodies (i.e. POST, PUT
       etc.).
1015
1016 http1_iovs
1017 • Units: struct iovec (=16 bytes)
1018
1019 • Default: 64
1020
1021 • Minimum: 5
1022
1023 • Maximum: 1024
1024
1025 • Flags: wizard
1026
       Number of io vectors to allocate for HTTP1 protocol transmission. An
       HTTP1 header needs 7 io vectors, plus 2 per HTTP header field.
       Allocated from workspace_thread.
1030
1031 http_gzip_support
1032 • Units: bool
1033
1034 • Default: on
1035
       Enable gzip support. When enabled, Varnish requests compressed
       objects from the backend and stores them compressed. If a client does
       not support gzip encoding, Varnish will uncompress compressed objects
       on demand. Varnish will also rewrite the Accept-Encoding header of
       clients indicating support for gzip to:
1041 Accept-Encoding: gzip
1042
1043 Clients that do not support gzip will have their Accept-Encoding header
1044 removed. For more information on how gzip is implemented please see the
1045 chapter on gzip in the Varnish reference.
1046
1047 When gzip support is disabled the variables beresp.do_gzip and
1048 beresp.do_gunzip have no effect in VCL.
1049
1050 http_max_hdr
1051 • Units: header lines
1052
1053 • Default: 64
1054
1055 • Minimum: 32
1056
1057 • Maximum: 65535
1058
1059 Maximum number of HTTP header lines we allow in
1060 {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number
1061 of headers). Cheap, ~20 bytes, in terms of workspace memory. Note
1062 that the first line occupies five header lines.
1063
1064 http_range_support
1065 • Units: bool
1066
1067 • Default: on
1068
1069 Enable support for HTTP Range headers.
1070
1071 http_req_hdr_len
1072 • Units: bytes
1073
1074 • Default: 8k
1075
1076 • Minimum: 40b
1077
       Maximum length of any HTTP client request header we will allow. The
       limit includes any continuation lines.
1080
1081 http_req_size
1082 • Units: bytes
1083
1084 • Default: 32k
1085
1086 • Minimum: 0.25k
1087
1088 Maximum number of bytes of HTTP client request we will deal with. This
1089 is a limit on all bytes up to the double blank line which ends the HTTP
1090 request. The memory for the request is allocated from the client
1091 workspace (param: workspace_client) and this parameter limits how much
1092 of that the request is allowed to take up.
1093
1094 http_resp_hdr_len
1095 • Units: bytes
1096
1097 • Default: 8k
1098
1099 • Minimum: 40b
1100
       Maximum length of any HTTP backend response header we will allow.
       The limit includes any continuation lines.
1103
1104 http_resp_size
1105 • Units: bytes
1106
1107 • Default: 32k
1108
1109 • Minimum: 0.25k
1110
1111 Maximum number of bytes of HTTP backend response we will deal with.
1112 This is a limit on all bytes up to the double blank line which ends the
1113 HTTP response. The memory for the response is allocated from the back‐
1114 end workspace (param: workspace_backend) and this parameter limits how
1115 much of that the response is allowed to take up.
1116
1117 idle_send_timeout
1118 • Units: seconds
1119
1120 • Default: 60.000
1121
1122 • Minimum: 0.000
1123
1124 • Flags: delayed
1125
1126 Send timeout for individual pieces of data on client connections. May
1127 get extended if 'send_timeout' applies.
1128
1129 When this timeout is hit, the session is closed.
1130
1131 See the man page for setsockopt(2) or socket(7) under SO_SNDTIMEO for
1132 more information.
1133
1134 listen_depth
1135 • Units: connections
1136
1137 • Default: 1024
1138
1139 • Minimum: 0
1140
1141 • Flags: must_restart
1142
1143 Listen queue depth.
1144
1145 lru_interval
1146 • Units: seconds
1147
1148 • Default: 2.000
1149
1150 • Minimum: 0.000
1151
1152 • Flags: experimental
1153
1154 Grace period before object moves on LRU list. Objects are only moved
1155 to the front of the LRU list if they have not been moved there already
1156 inside this timeout period. This reduces the amount of lock operations
1157 necessary for LRU list access.
1158
1159 max_esi_depth
1160 • Units: levels
1161
1162 • Default: 5
1163
1164 • Minimum: 0
1165
1166 Maximum depth of esi:include processing.
1167
1168 max_restarts
1169 • Units: restarts
1170
1171 • Default: 4
1172
1173 • Minimum: 0
1174
1175 Upper limit on how many times a request can restart.
1176
1177 max_retries
1178 • Units: retries
1179
1180 • Default: 4
1181
1182 • Minimum: 0
1183
1184 Upper limit on how many times a backend fetch can retry.
1185
1186 max_vcl
1187 • Default: 100
1188
1189 • Minimum: 0
1190
1191 Threshold of loaded VCL programs. (VCL labels are not counted.) Pa‐
1192 rameter max_vcl_handling determines behaviour.
1193
1194 max_vcl_handling
1195 • Default: 1
1196
1197 • Minimum: 0
1198
1199 • Maximum: 2
1200
1201 Behaviour when attempting to exceed max_vcl loaded VCL.
1202
1203 • 0 - Ignore max_vcl parameter.
1204
1205 • 1 - Issue warning.
1206
1207 • 2 - Refuse loading VCLs.
1208
1209 nuke_limit
1210 • Units: allocations
1211
1212 • Default: 50
1213
1214 • Minimum: 0
1215
1216 • Flags: experimental
1217
       Maximum number of objects we attempt to nuke in order to make space
       for an object body.
1220
1221 pcre_match_limit
1222 • Default: 10000
1223
1224 • Minimum: 1
1225
1226 The limit for the number of calls to the internal match() function in
1227 pcre_exec().
1228
1229 (See: PCRE_EXTRA_MATCH_LIMIT in pcre docs.)
1230
1231 This parameter limits how much CPU time regular expression matching can
1232 soak up.
1233
1234 pcre_match_limit_recursion
1235 • Default: 20
1236
1237 • Minimum: 1
1238
1239 The recursion depth-limit for the internal match() function in a
1240 pcre_exec().
1241
1242 (See: PCRE_EXTRA_MATCH_LIMIT_RECURSION in pcre docs.)
1243
1244 This puts an upper limit on the amount of stack used by PCRE for cer‐
1245 tain classes of regular expressions.
1246
1247 We have set the default value low in order to prevent crashes, at the
1248 cost of possible regexp matching failures.
1249
1250 Matching failures will show up in the log as VCL_Error messages with
1251 regexp errors -27 or -21.
1252
1253 Testcase r01576 can be useful when tuning this parameter.
1254
1255 ping_interval
1256 • Units: seconds
1257
1258 • Default: 3
1259
1260 • Minimum: 0
1261
1262 • Flags: must_restart
1263
1264 Interval between pings from parent to child. Zero will disable pinging
1265 entirely, which makes it possible to attach a debugger to the child.
1266
1267 pipe_sess_max
1268 • Units: connections
1269
1270 • Default: 0
1271
1272 • Minimum: 0
1273
1274 Maximum number of sessions dedicated to pipe transactions.
1275
1276 pipe_timeout
1277 • Units: seconds
1278
1279 • Default: 60.000
1280
1281 • Minimum: 0.000
1282
       Idle timeout for PIPE sessions. If nothing has been received in
       either direction for this many seconds, the session is closed.
1285
1286 pool_req
1287 • Default: 10,100,10
1288
1289 Parameters for per worker pool request memory pool.
1290
1291 The three numbers are:
1292
1293 min_pool
1294 minimum size of free pool.
1295
1296 max_pool
1297 maximum size of free pool.
1298
1299 max_age
1300 max age of free element.
1301
1302 pool_sess
1303 • Default: 10,100,10
1304
1305 Parameters for per worker pool session memory pool.
1306
1307 The three numbers are:
1308
1309 min_pool
1310 minimum size of free pool.
1311
1312 max_pool
1313 maximum size of free pool.
1314
1315 max_age
1316 max age of free element.
1317
1318 pool_vbo
1319 • Default: 10,100,10
1320
1321 Parameters for backend object fetch memory pool.
1322
1323 The three numbers are:
1324
1325 min_pool
1326 minimum size of free pool.
1327
1328 max_pool
1329 maximum size of free pool.
1330
1331 max_age
1332 max age of free element.
1333
1334 prefer_ipv6
1335 • Units: bool
1336
1337 • Default: off
1338
1339 Prefer IPv6 address when connecting to backends which have both IPv4
1340 and IPv6 addresses.
1341
1342 rush_exponent
1343 • Units: requests per request
1344
1345 • Default: 3
1346
1347 • Minimum: 2
1348
1349 • Flags: experimental
1350
       How many parked requests we start for each completed request on the
       object. NB: Even with the implicit delay of delivery, this parameter
       controls an exponential increase in the number of worker threads.
1354
1355 send_timeout
1356 • Units: seconds
1357
1358 • Default: 600.000
1359
1360 • Minimum: 0.000
1361
1362 • Flags: delayed
1363
1364 Total timeout for ordinary HTTP1 responses. Does not apply to some in‐
1365 ternally generated errors and pipe mode.
1366
1367 When 'idle_send_timeout' is hit while sending an HTTP1 response, the
1368 timeout is extended unless the total time already taken for sending the
1369 response in its entirety exceeds this many seconds.
1370
       When this timeout is hit, the session is closed.
1372
1373 shortlived
1374 • Units: seconds
1375
1376 • Default: 10.000
1377
1378 • Minimum: 0.000
1379
1380 Objects created with (ttl+grace+keep) shorter than this are always put
1381 in transient storage.
1382
1383 sigsegv_handler
1384 • Units: bool
1385
1386 • Default: on
1387
1388 • Flags: must_restart
1389
1390 Install a signal handler which tries to dump debug information on seg‐
1391 mentation faults, bus errors and abort signals.
1392
1393 syslog_cli_traffic
1394 • Units: bool
1395
1396 • Default: on
1397
1398 Log all CLI traffic to syslog(LOG_INFO).
1399
1400 tcp_fastopen
1401 • Units: bool
1402
1403 • Default: off
1404
1405 • Flags: must_restart
1406
1407 Enable TCP Fast Open extension.
1408
1409 tcp_keepalive_intvl
1410 • Units: seconds
1411
1412 • Default: platform dependent
1413
1414 • Minimum: 1.000
1415
1416 • Maximum: 100.000
1417
1418 • Flags: experimental
1419
1420 The number of seconds between TCP keep-alive probes. Ignored for Unix
1421 domain sockets.
1422
1423 tcp_keepalive_probes
1424 • Units: probes
1425
1426 • Default: platform dependent
1427
1428 • Minimum: 1
1429
1430 • Maximum: 100
1431
1432 • Flags: experimental
1433
1434 The maximum number of TCP keep-alive probes to send before giving up
1435 and killing the connection if no response is obtained from the other
1436 end. Ignored for Unix domain sockets.
1437
1438 tcp_keepalive_time
1439 • Units: seconds
1440
1441 • Default: platform dependent
1442
1443 • Minimum: 1.000
1444
1445 • Maximum: 7200.000
1446
1447 • Flags: experimental
1448
1449 The number of seconds a connection needs to be idle before TCP begins
1450 sending out keep-alive probes. Ignored for Unix domain sockets.
1451
1452 thread_pool_add_delay
1453 • Units: seconds
1454
1455 • Default: 0.000
1456
1457 • Minimum: 0.000
1458
1459 • Flags: experimental
1460
1461 Wait at least this long after creating a thread.
1462
1463 Some (buggy) systems may need a short (sub-second) delay between creat‐
1464 ing threads. Set this to a few milliseconds if you see the
1465 'threads_failed' counter grow too much.
1466
1467 Setting this too high results in insufficient worker threads.
1468
1469 thread_pool_destroy_delay
1470 • Units: seconds
1471
1472 • Default: 1.000
1473
1474 • Minimum: 0.010
1475
1476 • Flags: delayed, experimental
1477
1478 Wait this long after destroying a thread.
1479
1480 This controls the decay of thread pools when idle(-ish).
1481
1482 thread_pool_fail_delay
1483 • Units: seconds
1484
1485 • Default: 0.200
1486
1487 • Minimum: 0.010
1488
1489 • Flags: experimental
1490
1491 Wait at least this long after a failed thread creation before trying to
1492 create another thread.
1493
       Failure to create a worker thread is often a sign that the end is
       near, because the process is running out of some resource. This
       delay tries not to rush the end needlessly.
1497
1498 If thread creation failures are a problem, check that thread_pool_max
1499 is not too high.
1500
       It may also help to increase thread_pool_timeout and thread_pool_min,
       to reduce the rate at which threads are destroyed and later
       recreated.
1503
1504 thread_pool_max
1505 • Units: threads
1506
1507 • Default: 5000
1508
1509 • Minimum: thread_pool_min
1510
1511 • Flags: delayed
1512
1513 The maximum number of worker threads in each pool.
1514
1515 Do not set this higher than you have to, since excess worker threads
1516 soak up RAM and CPU and generally just get in the way of getting work
1517 done.
1518
1519 thread_pool_min
1520 • Units: threads
1521
1522 • Default: 100
1523
1524 • Minimum: 5
1525
1526 • Maximum: thread_pool_max
1527
1528 • Flags: delayed
1529
1530 The minimum number of worker threads in each pool.
1531
1532 Increasing this may help ramp up faster from low load situations or
1533 when threads have expired.
1534
       The technical minimum is 5 threads, but this parameter is strongly
       recommended to be at least 10.
1537
1538 thread_pool_reserve
1539 • Units: threads
1540
1541 • Default: 0
1542
1543 • Maximum: 95% of thread_pool_min
1544
1545 • Flags: delayed
1546
1547 The number of worker threads reserved for vital tasks in each pool.
1548
1549 Tasks may require other tasks to complete (for example, client requests
1550 may require backend requests, http2 sessions require streams, which re‐
1551 quire requests). This reserve is to ensure that lower priority tasks do
1552 not prevent higher priority tasks from running even under high load.
1553
1554 The effective value is at least 5 (the number of internal priority
1555 classes), irrespective of this parameter. Default is 0 to auto-tune
1556 (5% of thread_pool_min). Minimum is 1 otherwise, maximum is 95% of
1557 thread_pool_min.
1558
1559 thread_pool_stack
1560 • Units: bytes
1561
1562 • Default: sysconf(_SC_THREAD_STACK_MIN)
1563
1564 • Minimum: 16k
1565
1566 • Flags: delayed
1567
1568 Worker thread stack size. This will likely be rounded up to a multiple
1569 of 4k (or whatever the page_size might be) by the kernel.
1570
1571 The required stack size is primarily driven by the depth of the
1572 call-tree. The most common relevant determining factors in varnish core
1573 code are GZIP (un)compression, ESI processing and regular expression
1574 matches. VMODs may also require significant amounts of additional
1575 stack. The nesting depth of VCL subs is another factor, although typi‐
1576 cally not predominant.
1577
1578 The stack size is per thread, so the maximum total memory required for
1579 worker thread stacks is in the order of size = thread_pools x
1580 thread_pool_max x thread_pool_stack.
1581
1582 Thus, in particular for setups with many threads, keeping the stack
1583 size at a minimum helps reduce the amount of memory required by Var‐
1584 nish.
1585
1586 On the other hand, thread_pool_stack must be large enough under all
1587 circumstances, otherwise varnish will crash due to a stack overflow.
1588 Usually, a stack overflow manifests itself as a segmentation fault (aka
1589 segfault / SIGSEGV) with the faulting address being near the stack
1590 pointer (sp).
1591
1592 Unless stack usage can be reduced, thread_pool_stack must be increased
1593 when a stack overflow occurs. Setting it in 150%-200% increments is
1594 recommended until stack overflows cease to occur.
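
       As a rough worked example with assumed values (2 pools, 5000 threads
       per pool and an 80k stack), the upper bound would be:

          thread_pools x thread_pool_max x thread_pool_stack
            = 2 x 5000 x 80k
            = roughly 800 MB of address space for worker stacks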
1595
1596 thread_pool_timeout
1597 • Units: seconds
1598
1599 • Default: 300.000
1600
1601 • Minimum: 10.000
1602
1603 • Flags: delayed, experimental
1604
1605 Thread idle threshold.
1606
1607 Threads in excess of thread_pool_min, which have been idle for at least
1608 this long, will be destroyed.
1609
1610 thread_pool_watchdog
1611 • Units: seconds
1612
1613 • Default: 60.000
1614
1615 • Minimum: 0.100
1616
1617 • Flags: experimental
1618
1619 Thread queue stuck watchdog.
1620
       If no queued work has been released for this long, the worker process
       panics itself.
1623
1624 thread_pools
1625 • Units: pools
1626
1627 • Default: 2
1628
1629 • Minimum: 1
1630
1631 • Maximum: defined when Varnish is built
1632
1633 • Flags: delayed, experimental
1634
1635 Number of worker thread pools.
1636
1637 Increasing the number of worker pools decreases lock contention. Each
1638 worker pool also has a thread accepting new connections, so for very
1639 high rates of incoming new connections on systems with many cores, in‐
1640 creasing the worker pools may be required.
1641
1642 Too many pools waste CPU and RAM resources, and more than one pool for
1643 each CPU is most likely detrimental to performance.
1644
1645 Can be increased on the fly, but decreases require a restart to take
1646 effect, unless the drop_pools experimental debug flag is set.
1647
1648 thread_queue_limit
1649 • Default: 20
1650
1651 • Minimum: 0
1652
1653 • Flags: experimental
1654
1655 Permitted request queue length per thread-pool.
1656
1657 This sets the number of requests we will queue, waiting for an avail‐
1658 able thread. Above this limit sessions will be dropped instead of
1659 queued.
1660
1661 thread_stats_rate
1662 • Units: requests
1663
1664 • Default: 10
1665
1666 • Minimum: 0
1667
1668 • Flags: experimental
1669
       Worker threads accumulate statistics, and dump these into the global
       stats counters if the lock is free when they finish a job
       (request/fetch etc.). This parameter defines the maximum number of
       jobs a worker thread may handle before it is forced to dump its
       accumulated stats into the global counters.
1675
1676 timeout_idle
1677 • Units: seconds
1678
1679 • Default: 5.000
1680
1681 • Minimum: 0.000
1682
1683 Idle timeout for client connections.
1684
1685 A connection is considered idle until we have received the full request
1686 headers.
1687
1688 This parameter is particularly relevant for HTTP1 keepalive connec‐
1689 tions which are closed unless the next request is received before this
1690 timeout is reached.
1691
1692 timeout_linger
1693 • Units: seconds
1694
1695 • Default: 0.050
1696
1697 • Minimum: 0.000
1698
1699 • Flags: experimental
1700
       How long the worker thread lingers on an idle session before handing
       it over to the waiter. When sessions are reused, as much as half of
       all reuses happen within the first 100 msec of the previous request
       completing. Setting this too high results in worker threads not doing
       anything for their keep; setting it too low just means that more
       sessions take a detour around the waiter.
1707
1708 vcc_acl_pedantic
1709 • Units: bool
1710
1711 • Default: off
1712
       Insist that network numbers used in ACLs have an all-zero host part,
       e.g. make 1.2.3.4/24 an error. With this option set to off (the
       default), the host part of network numbers is fixed to all zeroes
       (e.g. the above is changed to 1.2.3.0/24), a warning is output during
       VCL compilation, and any ACL entry hits are logged with the fixed
       address as "fixed: ..." after the original VCL entry. With this
       option set to on, any ACL entries with non-zero host parts cause VCL
       compilation to fail.
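
       For instance, with vcc_acl_pedantic switched on, the first entry in
       the sketch below is a compile-time error while the second is accepted
       (the addresses are illustrative):

          acl local_nets {
              "192.168.1.17"/24;    # error when pedantic: host bits not zero
              "192.168.1.0"/24;     # accepted either way
          }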
1720
1721 vcc_allow_inline_c
1722 • Units: bool
1723
1724 • Default: off
1725
1726 Allow inline C code in VCL.
1727
1728 vcc_err_unref
1729 • Units: bool
1730
1731 • Default: on
1732
1733 Unreferenced VCL objects result in error.
1734
1735 vcc_unsafe_path
1736 • Units: bool
1737
1738 • Default: on
1739
1740 Allow '/' in vmod & include paths. Allow 'import ... from ...'.
1741
1742 vcl_cooldown
1743 • Units: seconds
1744
1745 • Default: 600.000
1746
1747 • Minimum: 1.000
1748
1749 How long a VCL is kept warm after being replaced as the active VCL
1750 (granularity approximately 30 seconds).
1751
1752 vcl_path
1753 • Default: /opt/varnish/etc/varnish:/opt/varnish/share/varnish/vcl
1754
1755 Directory (or colon separated list of directories) from which relative
1756 VCL filenames (vcl.load and include) are to be found. By default Var‐
1757 nish searches VCL files in both the system configuration and shared
1758 data directories to allow packages to drop their VCL files in a stan‐
1759 dard location where relative includes would work.
1760
1761 vmod_path
1762 • Default: /opt/varnish/lib/varnish/vmods
1763
1764 Directory (or colon separated list of directories) where VMODs are to
1765 be found.
1766
1767 vsl_buffer
1768 • Units: bytes
1769
1770 • Default: 4k
1771
1772 • Minimum: vsl_reclen + 12 bytes
1773
1774 Bytes of (req-/backend-)workspace dedicated to buffering VSL records.
1775 When this parameter is adjusted, most likely workspace_client and
1776 workspace_backend will have to be adjusted by the same amount.
1777
1778 Setting this too high costs memory, setting it too low will cause more
1779 VSL flushes and likely increase lock-contention on the VSL mutex.
1780
1781 vsl_mask
       • Default: -Debug,-ObjProtocol,-ObjStatus,-ObjReason,-ObjHeader,
         -VCL_trace,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2RxBody,
         -H2TxHdr,-H2TxBody
1785
1786 Mask individual VSL messages from being logged.
1787
1788 default
1789 Set default value
1790
1791 Use +/- prefix in front of VSL tag name to unmask/mask individual VSL
1792 messages.
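
       For example (the tag choice is illustrative), the Hash records could
       be unmasked and ReqHeader records additionally masked with:

          varnishd -f /etc/varnish/default.vcl -p vsl_mask=+Hash,-ReqHeader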
1793
1794 vsl_reclen
1795 • Units: bytes
1796
1797 • Default: 255b
1798
1799 • Minimum: 16b
1800
1801 • Maximum: vsl_buffer - 12 bytes
1802
1803 Maximum number of bytes in SHM log record.
1804
1805 vsl_space
1806 • Units: bytes
1807
1808 • Default: 80M
1809
1810 • Minimum: 1M
1811
1812 • Maximum: 4G
1813
1814 • Flags: must_restart
1815
1816 The amount of space to allocate for the VSL fifo buffer in the VSM mem‐
1817 ory segment. If you make this too small, varnish{ncsa|log} etc will
1818 not be able to keep up. Making it too large just costs memory re‐
1819 sources.
1820
1821 vsm_free_cooldown
1822 • Units: seconds
1823
1824 • Default: 60.000
1825
1826 • Minimum: 10.000
1827
1828 • Maximum: 600.000
1829
1830 How long VSM memory is kept warm after a deallocation (granularity ap‐
1831 proximately 2 seconds).
1832
1833 vsm_space
1834 • Units: bytes
1835
1836 • Default: 1M
1837
1838 • Minimum: 1M
1839
1840 • Maximum: 1G
1841
       DEPRECATED: This parameter is ignored. There is no global limit on
       the amount of shared memory now.
1844
1845 workspace_backend
1846 • Units: bytes
1847
1848 • Default: 64k
1849
1850 • Minimum: 1k
1851
1852 • Flags: delayed
1853
1854 Bytes of HTTP protocol workspace for backend HTTP req/resp. If larger
1855 than 4k, use a multiple of 4k for VM efficiency.
1856
1857 workspace_client
1858 • Units: bytes
1859
1860 • Default: 64k
1861
1862 • Minimum: 9k
1863
1864 • Flags: delayed
1865
       Bytes of HTTP protocol workspace for client HTTP req/resp. Use a
       multiple of 4k for VM efficiency. For HTTP/2 compliance this must be
       at least 20k, in order to receive fullsize (=16k) frames from the
       client. That usually happens only in POST/PUT bodies. For other
       traffic patterns smaller values work just fine.
1871
1872 workspace_session
1873 • Units: bytes
1874
1875 • Default: 0.75k
1876
1877 • Minimum: 0.25k
1878
1879 • Flags: delayed
1880
1881 Allocation size for session structure and workspace. The workspace
1882 is primarily used for TCP connection addresses. If larger than 4k, use
1883 a multiple of 4k for VM efficiency.
1884
1885 workspace_thread
1886 • Units: bytes
1887
1888 • Default: 2k
1889
1890 • Minimum: 0.25k
1891
1892 • Maximum: 8k
1893
1894 • Flags: delayed
1895
1896 Bytes of auxiliary workspace per thread. This workspace is used for
1897 certain temporary data structures during the operation of a worker
1898 thread. One use is for the IO-vectors used during delivery. Setting
1899 this parameter too low may increase the number of writev() syscalls,
1900 setting it too high just wastes space. ~0.1k + UIO_MAXIOV *
1901 sizeof(struct iovec) (typically = ~16k for 64bit) is considered the
1902 maximum sensible value under any known circumstances (excluding exotic
1903 vmod use).
1904
EXIT CODES
       Varnish and bundled tools will, in most cases, exit with one of the
       following codes:
1908
1909 • 0 OK
1910
1911 • 1 Some error which could be system-dependent and/or transient
1912
1913 • 2 Serious configuration / parameter error - retrying with the same
1914 configuration / parameters is most likely useless
1915
       The varnishd master process may also OR its exit code:
1917
1918 • with 0x20 when the varnishd child process died,
1919
1920 • with 0x40 when the varnishd child process was terminated by a signal
1921 and
1922
1923 • with 0x80 when a core was dumped.
1924
SEE ALSO
       • varnishlog(1)
1927
1928 • varnishhist(1)
1929
1930 • varnishncsa(1)
1931
1932 • varnishstat(1)
1933
1934 • varnishtop(1)
1935
1936 • varnish-cli(7)
1937
1938 • vcl(7)
1939
HISTORY
       The varnishd daemon was developed by Poul-Henning Kamp in cooperation
1942 with Verdens Gang AS and Varnish Software.
1943
1944 This manual page was written by Dag-Erling Smørgrav with updates by
1945 Stig Sandbeck Mathisen <ssm@debian.org>, Nils Goroll and others.
1946
COPYRIGHT
       This document is licensed under the same licence as Varnish itself. See
1949 LICENCE for details.
1950
1951 • Copyright (c) 2007-2015 Varnish Software AS
1952
1953
1954
1955
1956 VARNISHD(1)