VMOD_DIRECTORS(3)                                            VMOD_DIRECTORS(3)

NAME
vmod_directors - Varnish Directors Module

SYNOPSIS
import directors [from "path"] ;
new xround_robin = directors.round_robin()

VOID xround_robin.add_backend(BACKEND)

VOID xround_robin.remove_backend(BACKEND)

BACKEND xround_robin.backend()

new xfallback = directors.fallback(BOOL sticky)

VOID xfallback.add_backend(BACKEND)

VOID xfallback.remove_backend(BACKEND)

BACKEND xfallback.backend()

new xrandom = directors.random()

VOID xrandom.add_backend(BACKEND, REAL)

VOID xrandom.remove_backend(BACKEND)

BACKEND xrandom.backend()

new xhash = directors.hash()

VOID xhash.add_backend(BACKEND, REAL)

VOID xhash.remove_backend(BACKEND)

BACKEND xhash.backend(STRING)

new xshard = directors.shard()

VOID xshard.set_warmup(REAL probability)

VOID xshard.set_rampup(DURATION duration)

VOID xshard.associate(BLOB param)

BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup])

BOOL xshard.remove_backend([BACKEND backend], [STRING ident])

BOOL xshard.clear()

BOOL xshard.reconfigure(INT replicas)

INT xshard.key(STRING)

BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param], [ENUM resolve])

VOID xshard.debug(INT)

new xshard_param = directors.shard_param()

VOID xshard_param.clear()

VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy])

STRING xshard_param.get_by()

INT xshard_param.get_key()

INT xshard_param.get_alt()

REAL xshard_param.get_warmup()

BOOL xshard_param.get_rampup()

STRING xshard_param.get_healthy()

BLOB xshard_param.use()
DESCRIPTION
vmod_directors enables backend load balancing in Varnish.

The module implements load balancing techniques, and also serves as an
example of how one could extend the load balancing capabilities of
Varnish.

To enable load balancing you must import this vmod (directors).

Then you define your backends. Once you have the backends declared, you
can add them to a director. This happens in executed VCL code. If you
want to emulate the previous behavior of Varnish 3.0, you can simply
initialize the directors in vcl_init, like this:

sub vcl_init {
    new vdir = directors.round_robin();
    vdir.add_backend(backend1);
    vdir.add_backend(backend2);
}

As you can see, there is nothing keeping you from manipulating the
directors elsewhere in VCL. For example, you could have VCL code that
adds more backends to a director when a certain URL is called.

Note that directors can use other directors as backends.

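For example, a fallback director can be layered over two round-robin
directors. This is a hedged sketch; b1 through b4 are assumed backend
declarations, not part of this module:

sub vcl_init {
    new pool_a = directors.round_robin();
    pool_a.add_backend(b1);
    pool_a.add_backend(b2);

    new pool_b = directors.round_robin();
    pool_b.add_backend(b3);
    pool_b.add_backend(b4);

    # prefer pool_a; use pool_b only if pool_a has no healthy backend
    new vdir = directors.fallback();
    vdir.add_backend(pool_a.backend());
    vdir.add_backend(pool_b.backend());
}
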
new xround_robin = directors.round_robin()
Description
Create a round robin director.

This director will pick backends in a round robin fashion.

Example
new vdir = directors.round_robin();

VOID xround_robin.add_backend(BACKEND)
Description
Add a backend to the round-robin director.

Example
vdir.add_backend(backend1);

VOID xround_robin.remove_backend(BACKEND)
Description
Remove a backend from the round-robin director.

Example
vdir.remove_backend(backend1);

BACKEND xround_robin.backend()
Description
Pick a backend from the director.

Example
set req.backend_hint = vdir.backend();

new xfallback = directors.fallback(BOOL sticky=0)
Description
Create a fallback director.

A fallback director will try each of the added backends in turn, and
return the first one that is healthy.

If sticky is set to true, the director will keep using the healthy
backend, even if a higher-priority backend becomes available. Once the
whole backend list is exhausted, it will start over at the beginning.

Example
new vdir = directors.fallback();

VOID xfallback.add_backend(BACKEND)
Description
Add a backend to the director.

Note that the order in which this is done matters for the fallback
director.

Example
vdir.add_backend(backend1);

VOID xfallback.remove_backend(BACKEND)
Description
Remove a backend from the director.

Example
vdir.remove_backend(backend1);

BACKEND xfallback.backend()
Description
Pick a backend from the director.

Example
set req.backend_hint = vdir.backend();

new xrandom = directors.random()
Description
Create a random backend director.

The random director distributes load over the backends using a weighted
random probability distribution. The "testable" random generator in
varnishd is used, which enables deterministic tests to be run (See:
d00004.vtc).

Example
new vdir = directors.random();

VOID xrandom.add_backend(BACKEND, REAL)
Description
Add a backend to the director with a given weight.

Each backend will receive approximately 100 * (weight /
(sum(all_added_weights))) percent of the traffic sent to this director.

Example
# 2/3 to backend1, 1/3 to backend2.
vdir.add_backend(backend1, 10.0);
vdir.add_backend(backend2, 5.0);

VOID xrandom.remove_backend(BACKEND)
Description
Remove a backend from the director.

Example
vdir.remove_backend(backend1);

BACKEND xrandom.backend()
Description
Pick a backend from the director.

Example
set req.backend_hint = vdir.backend();

new xhash = directors.hash()
Description
Create a hashing backend director.

The director chooses the backend server by computing a hash/digest of
the string given to .backend().

Commonly used with client.ip or a session cookie to get sticky
sessions.

Example
new vdir = directors.hash();

VOID xhash.add_backend(BACKEND, REAL)
Description
Add a backend to the director with a certain weight.

Weight is used as in the random director. The recommended value is 1.0
unless you have special needs.

Example
vdir.add_backend(backend1, 1.0);

VOID xhash.remove_backend(BACKEND)
Description
Remove a backend from the director.

Example
vdir.remove_backend(backend1);

BACKEND xhash.backend(STRING)
Description
Pick a backend from the backend director.

Use the string or list of strings provided to pick the backend.

Example
# pick a backend based on the cookie header from the client
set req.backend_hint = vdir.backend(req.http.cookie);

new xshard = directors.shard()
Create a shard director.

Note that the shard director needs to be configured using at least one
shard.add_backend() call followed by a shard.reconfigure() call before
it can hand out backends.

_Note_ that due to various restrictions (documented below), it is
recommended to use the shard director on the backend side.

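A minimal working configuration might look like this (backend1 through
backend3 are assumed backend declarations):

sub vcl_init {
    new vdir = directors.shard();
    vdir.add_backend(backend1);
    vdir.add_backend(backend2);
    vdir.add_backend(backend3);
    # build the consistent hashing ring; required before first use
    vdir.reconfigure();
}
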
Introduction
The shard director selects backends by a key, which can be provided
directly or derived from strings. For the same key, the shard director
will always return the same backend, unless the backend configuration
or health state changes. Conversely, for differing keys, the shard
director will likely choose different backends. In the default
configuration, unhealthy backends are not selected.

The shard director resembles the hash director, but its main advantage
is that, when the backend configuration or health states change, the
association of keys to backends remains as stable as possible.

In addition, the rampup and warmup features can help to further improve
user-perceived response times.

Sharding
This basic technique allows for numerous applications like optimizing
backend server cache efficiency, Varnish clustering, or persisting
sessions to servers without keeping any state, and, in particular,
without the need to synchronize state between nodes of a cluster of
Varnish servers:

· Many applications use caches for data objects, so, in a cluster of
  application servers, requesting similar objects from the same server
  may help to optimize the efficiency of such caches.

  For example, sharding by URL or some id component of the URL has
  been shown to drastically improve the efficiency of many content
  management systems.

· As a special case of the previous example, in clusters of Varnish
  servers without additional request distribution logic, each cache
  needs to store all hot objects, so the effective cache size is
  approximately the smallest cache size of any server in the cluster.

  Sharding allows objects to be segregated within the cluster such
  that each object is only cached on one of the servers (or on one
  primary and one backup, on a primary for long and others for short,
  etc.). Effectively, this leads to a cache size on the order of the
  sum of all individual caches, with the potential to drastically
  increase efficiency (scales by the number of servers).

· Another application is to implement persistence of backend requests,
  such that all requests sharing a certain criterion (such as an IP
  address or session ID) get forwarded to the same backend server.

When used with clusters of Varnish servers, the shard director will,
if otherwise configured equally, make the same decision on all
servers. In other words, requests sharing a common criterion used as
the shard key will be balanced onto the same backend server(s) no
matter which Varnish server handles the request.

The drawbacks are:

· The distribution of requests depends on the number of requests per
  key and the uniformity of the distribution of key values. In short,
  while this technique may lead to much better efficiency overall, it
  may also lead to poorer load balancing in specific cases.

· When a backend server becomes unavailable, every persistence
  technique has to reselect a new backend server, but this technique
  will also switch back to the preferred server once it becomes
  healthy again. So, when used for persistence, it is generally less
  stable compared to stateful techniques (which would continue to use
  a selected server for as long as possible, or as dictated by a TTL).

Method
When .reconfigure() is called, a consistent hashing circular data
structure gets built from the last 32 bits of SHA256 hash values of
<ident><n> (the default ident being the backend name) for each backend
and for a running number n from 1 to replicas. Hashing creates the
seemingly random order for placement of backends on the consistent
hashing ring.

When .backend() is called, a load balancing key gets generated unless
provided. The ring is then searched (clockwise, wrapping around as
necessary) for the smallest hash value larger than the key. The
backend for this hash value is the preferred backend for the given
key.

If a healthy backend is requested, the search continues linearly on
the ring as long as the backends found are unhealthy, until all
backends have been checked. The order of these "alternative backends"
on the ring is likely to differ for different keys. Alternative
backends can also be selected explicitly.

On consistent hashing see:

· http://www8.org/w8-papers/2a-webserver/caching/paper2.html

· http://www.audioscrobbler.net/development/ketama/

· svn://svn.audioscrobbler.net/misc/ketama

· http://en.wikipedia.org/wiki/Consistent_hashing

Error Reporting
Failing methods should report errors to VSL with the Error tag, so
when configuring the shard director, you are advised to check:

varnishlog -I Error:^shard

VOID xshard.set_warmup(REAL probability=0.0)
Set the default warmup probability. See the warmup parameter of
shard.backend(). If probability is 0.0 (default), warmup is disabled.

VOID xshard.set_rampup(DURATION duration=0)
Set the default rampup duration. See the rampup parameter of
shard.backend(). If duration is 0 (default), rampup is disabled.

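As a sketch, both defaults can be set once in vcl_init{}; the 10%
warmup and 30 second rampup values below are arbitrary examples:

sub vcl_init {
    new vdir = directors.shard();
    # send roughly 10% of each key's requests to the next
    # alternative backend to keep its cache warm
    vdir.set_warmup(0.1);
    # spread traffic onto a newly healthy backend over 30 seconds
    vdir.set_rampup(30s);
}
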
VOID xshard.associate(BLOB param=0)
Associate a default shard_param object or clear an association.

The value of the param argument must be a call to the
shard_param.use() method. No argument clears the association.

The association can be changed per backend request using the param
argument of shard.backend().

shard.add_backend(...)
BOOL xshard.add_backend(
    BACKEND backend,
    [STRING ident],
    [DURATION rampup]
)

Add the backend backend to the director.

ident: Optionally specify an identification string for this backend,
which will be hashed by shard.reconfigure() to construct the
consistent hashing ring. The identification string defaults to the
backend name.

ident allows adding multiple instances of the same backend.

rampup: Optionally specify a specific rampup time for this backend.
Otherwise, the per-director rampup time is used (see
shard.set_rampup()).

NOTE: Backend changes need to be finalized with shard.reconfigure()
and are only supported on one shard director at a time.

shard.remove_backend(...)
BOOL xshard.remove_backend(
    [BACKEND backend=0],
    [STRING ident=0]
)

Remove backend(s) from the director. Either backend or ident must be
specified. ident removes a specific instance. If backend is given
without ident, all instances of this backend are removed.

NOTE: Backend changes need to be finalized with shard.reconfigure()
and are only supported on one shard director at a time.

BOOL xshard.clear()
Remove all backends from the director.

NOTE: Backend changes need to be finalized with shard.reconfigure()
and are only supported on one shard director at a time.

BOOL xshard.reconfigure(INT replicas=67)
Reconfigure the consistent hashing ring to reflect backend changes.

This method must be called at least once before the director can be
used.

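For example, removing a backend only takes effect once the ring is
rebuilt (vdir and backend1 are assumed to exist):

# remove all instances of backend1, then rebuild the ring
vdir.remove_backend(backend=backend1);
vdir.reconfigure();
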
INT xshard.key(STRING)
Convenience method to generate a sharding key for use with the key
argument of the shard.backend() method by hashing the given string
with SHA256.

To generate sharding keys using other hashes, use a custom vmod like
vmod blobdigest with the key_blob argument of the shard.backend()
method.

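For example, to shard by a session cookie on the backend side (vdir is
an assumed shard director instance; using the whole Cookie header is a
simplification):

sub vcl_backend_fetch {
    set bereq.backend_hint = vdir.backend(by=KEY,
        key=vdir.key(bereq.http.Cookie));
}
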
shard.backend(...)
BACKEND xshard.backend(
    [ENUM {HASH, URL, KEY, BLOB} by=HASH],
    [INT key],
    [BLOB key_blob],
    [INT alt=0],
    [REAL warmup=-1],
    [BOOL rampup=1],
    [ENUM {CHOSEN, IGNORE, ALL} healthy=CHOSEN],
    [BLOB param],
    [ENUM {NOW, LAZY} resolve]
)

Look up a backend on the consistent hashing ring.

This documentation uses the notion of an order of backends for a
particular shard key. This order is deterministic but seemingly random
as determined by the consistent hashing algorithm and is likely to
differ for different keys, depending on the number of backends and the
number of replicas. In particular, the backend order referred to here
is _not_ the order in which backends are added.

· by: how to determine the sharding key

  · HASH:

    · when called in backend context: use the varnish hash value as
      set by vcl_hash

    · when called in client context: hash req.url

  · URL: hash req.url / bereq.url

  · KEY: use the key argument

  · BLOB: use the key_blob argument

· key: lookup key for by=KEY

  The shard.key() method may come in handy to generate a sharding key
  from custom strings.

· key_blob: lookup key for by=BLOB

  Currently, this uses the first 4 bytes from the given blob in
  network byte order (big endian), left-padded with zeros for blobs
  smaller than 4 bytes.

· alt: alternative backend selection

  Select the alt-th alternative backend for the given key.

  This is particularly useful for retries / restarts due to backend
  errors: by setting alt=req.restarts or alt=bereq.retries with
  healthy=ALL, another server gets selected.

  The rampup and warmup features are only active for alt==0.

· rampup: slow start for servers which just went healthy

  If alt==0 and the chosen backend is in its rampup period, the next
  alternative backend is returned instead (unless it is also in its
  rampup period), with a probability proportional to the fraction of
  the rampup period that has passed since the backend became healthy.

  The default rampup interval can be set per shard director using the
  set_rampup() method or specifically per backend with the
  add_backend() method.

· warmup: probabilistic alternative server selection

  possible values: -1, 0..1

  -1: use the warmup probability from the director definition

  Only used for alt==0: sets the ratio of requests (0.0 to 1.0) that
  goes to the next alternative backend to warm it up when the
  preferred backend is healthy. Not active if the preferred or any
  alternative backend is in rampup.

  warmup=0.5 is a convenient way to spread the load for each key over
  two backends under normal operating conditions.

· healthy

  · CHOSEN: Return a healthy backend if possible.

    For alt==0, return the first healthy backend or none.

    For alt > 0, ignore the health state of backends skipped for
    alternative backend selection, then return the next healthy
    backend. If this does not exist, return the last healthy backend
    of those skipped, or none.

  · IGNORE: Completely ignore backend health state.

    Just return the first or alt-th alternative backend, ignoring
    health state. Ignore rampup and warmup.

  · ALL: Check health state also for alternative backend selection.

    For alt > 0, return the alt-th alternative backend of all those
    healthy, the last healthy backend found, or none.

· resolve

  default: LAZY in vcl_init{}, NOW otherwise

  · NOW: look up a backend and return it.

    Cannot be used in vcl_init{}.

  · LAZY: return an instance of this director for later backend
    resolution.

    LAZY mode is required for referencing shard director instances,
    for example as backends for other directors (director layering).

    In vcl_init{} and on the client side, LAZY mode cannot be used
    with any other argument.

    On the backend side, parameters from arguments or an associated
    parameter set affect the shard director instance for the backend
    request irrespective of where it is referenced.

· param

  Use or associate a parameter set. The value of the param argument
  must be a call to the shard_param.use() method.

  default: as set by shard.associate() or unset.

  · for resolve=NOW, take parameter defaults from the shard_param
    parameter set

  · for resolve=LAZY, associate the shard_param parameter set for
    this backend request

  Implementation notes for use of parameter sets with resolve=LAZY:

  · A param argument remains associated, and any changes to the
    associated parameter set affect the sharding decision once the
    director resolves to an actual backend.

  · If other parameter arguments are also given, they take precedence
    and are kept even if the parameter set given by the param
    argument is subsequently changed within the same backend request.

  · Each call to shard.backend() overrides any previous call.

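Putting some of these arguments together, a hedged sketch of
backend-side selection with retries (vdir is an assumed shard director
instance configured in vcl_init{}):

sub vcl_backend_fetch {
    # first attempt goes to the preferred backend for the URL; each
    # retry steps to the next alternative, checking its health too
    set bereq.backend_hint = vdir.backend(by=URL,
        alt=bereq.retries,
        healthy=ALL);
}
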
VOID xshard.debug(INT)
intentionally undocumented

new xshard_param = directors.shard_param()
Create a shard parameter set.

A parameter set allows for re-use of shard.backend() arguments across
many shard director instances and simplifies advanced use cases (e.g.
a shard director with custom parameters layered below other
directors).

Parameter sets have two scopes:

· per-VCL scope, defined in vcl_init{}

· per backend request scope

The per-VCL scope defines defaults for the per backend scope. Any
changes to a parameter set in backend context only affect the
respective backend request.

Parameter sets cannot be used in client context.

VOID xshard_param.clear()
Reset the parameter set to default values as documented for
shard.backend().

· in vcl_init{}, resets the parameter set default for this VCL

· in backend context, resets the parameter set for this backend
  request to the VCL defaults

This method may not be used in client context.

shard_param.set(...)
VOID xshard_param.set(
    [ENUM {HASH, URL, KEY, BLOB} by],
    [INT key],
    [BLOB key_blob],
    [INT alt],
    [REAL warmup],
    [BOOL rampup],
    [ENUM {CHOSEN, IGNORE, ALL} healthy]
)

Change the given parameters of a parameter set as documented for
shard.backend().

· in vcl_init{}, changes the parameter set default for this VCL

· in backend context, changes the parameter set for this backend
  request, keeping the defaults set for this VCL for unspecified
  arguments.

This method may not be used in client context.

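The two scopes can be sketched as follows (the parameter values are
arbitrary examples):

sub vcl_init {
    new p = directors.shard_param();
    # per-VCL default: shard by the key argument, check health of
    # alternative backends too
    p.set(by=KEY, healthy=ALL);
}

sub vcl_backend_fetch {
    # per-request override: select an alternative backend on
    # retries; other parameters keep their per-VCL defaults
    p.set(alt=bereq.retries);
}
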
STRING xshard_param.get_by()
Get a string representation of the by enum argument which denotes how
a shard director using this parameter object would derive the shard
key. See shard.backend().

INT xshard_param.get_key()
Get the key which a shard director using this parameter object would
use. See shard.backend().

INT xshard_param.get_alt()
Get the alt parameter which a shard director using this parameter
object would use. See shard.backend().

REAL xshard_param.get_warmup()
Get the warmup parameter which a shard director using this parameter
object would use. See shard.backend().

BOOL xshard_param.get_rampup()
Get the rampup parameter which a shard director using this parameter
object would use. See shard.backend().

STRING xshard_param.get_healthy()
Get a string representation of the healthy enum argument which a shard
director using this parameter object would use. See shard.backend().

BLOB xshard_param.use()
This method may only be used in backend context.

For use with the param argument of shard.backend() to associate this
shard parameter set with a shard director.

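For example, in backend context (vdir and p are assumed to have been
created and configured in vcl_init{}):

sub vcl_backend_fetch {
    # associate the parameter set with this backend request's
    # sharding decision
    set bereq.backend_hint = vdir.backend(resolve=LAZY,
        param=p.use());
}
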
ACKNOWLEDGEMENTS
Development of a previous version of the shard director was partly
sponsored by Deutsche Telekom AG - Products & Innovation.

Development of a previous version of the shard director was partly
sponsored by BILD GmbH & Co KG.

COPYRIGHT
This document is licensed under the same licence as Varnish itself.
See LICENCE for details.

Copyright (c) 2013-2015 Varnish Software AS
Copyright 2009-2018 UPLEX - Nils Goroll Systemoptimierung
All rights reserved.

Authors: Poul-Henning Kamp <phk@FreeBSD.org>
         Julian Wiesener <jw@uplex.de>
         Nils Goroll <slink@uplex.de>
         Geoffrey Simmons <geoff@uplex.de>

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

                                                             VMOD_DIRECTORS(3)