VMOD_DIRECTORS(3)                                            VMOD_DIRECTORS(3)


NAME
vmod_directors - Varnish Directors Module

SYNOPSIS
import directors [as name] [from "path"]

new xround_robin = directors.round_robin()

VOID xround_robin.add_backend(BACKEND)

VOID xround_robin.remove_backend(BACKEND)

BACKEND xround_robin.backend()

new xfallback = directors.fallback(BOOL sticky)

VOID xfallback.add_backend(BACKEND)

VOID xfallback.remove_backend(BACKEND)

BACKEND xfallback.backend()

new xrandom = directors.random()

VOID xrandom.add_backend(BACKEND, REAL)

VOID xrandom.remove_backend(BACKEND)

BACKEND xrandom.backend()

new xhash = directors.hash()

VOID xhash.add_backend(BACKEND, REAL weight)

VOID xhash.remove_backend(BACKEND)

BACKEND xhash.backend(STRING)

new xshard = directors.shard()

VOID xshard.set_warmup(REAL probability)

VOID xshard.set_rampup(DURATION duration)

VOID xshard.associate(BLOB param)

BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup], [REAL weight])

BOOL xshard.remove_backend([BACKEND backend], [STRING ident])

BOOL xshard.clear()

BOOL xshard.reconfigure(INT replicas)

INT xshard.key(STRING)

BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param], [ENUM resolve])

VOID xshard.debug(INT)

new xshard_param = directors.shard_param()

VOID xshard_param.clear()

VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy])

STRING xshard_param.get_by()

INT xshard_param.get_key()

INT xshard_param.get_alt()

REAL xshard_param.get_warmup()

BOOL xshard_param.get_rampup()

STRING xshard_param.get_healthy()

BLOB xshard_param.use()

BACKEND lookup(STRING)

DESCRIPTION
vmod_directors enables backend load balancing in Varnish.

The module implements load balancing techniques, and also serves as an
example of how one can extend the load balancing capabilities of
Varnish.

To enable load balancing you must import this vmod (directors).

Then you define your backends. Once you have the backends declared, you
can add them to a director. This happens in executed VCL code. If you
want to emulate the previous behavior of Varnish 3.0, you can simply
initialize the directors in vcl_init{}, like this:

    sub vcl_init {
        new vdir = directors.round_robin();
        vdir.add_backend(backend1);
        vdir.add_backend(backend2);
    }

As you can see, there is nothing keeping you from manipulating the
directors elsewhere in VCL. For example, you could have VCL code that
adds more backends to a director when a certain URL is called.

Note that directors can use other directors as backends.

new xround_robin = directors.round_robin()
Create a round robin director.

This director will pick backends in a round robin fashion.

Example:

    new vdir = directors.round_robin();

VOID xround_robin.add_backend(BACKEND)
Add a backend to the round-robin director.

Example:

    vdir.add_backend(backend1);

VOID xround_robin.remove_backend(BACKEND)
Remove a backend from the round-robin director.

Example:

    vdir.remove_backend(backend1);

BACKEND xround_robin.backend()
Pick a backend from the director.

Example:

    set req.backend_hint = vdir.backend();

new xfallback = directors.fallback(BOOL sticky=0)
Create a fallback director.

A fallback director will try each of the added backends in turn and
return the first one that is healthy.

If sticky is set to true, the director will keep using the healthy
backend, even if a higher-priority backend becomes available. Once the
whole backend list is exhausted, it will start over at the beginning.

Example:

    new vdir = directors.fallback();

VOID xfallback.add_backend(BACKEND)
Add a backend to the director.

Note that the order in which this is done matters for the fallback
director.

Example:

    vdir.add_backend(backend1);

VOID xfallback.remove_backend(BACKEND)
Remove a backend from the director.

Example:

    vdir.remove_backend(backend1);

BACKEND xfallback.backend()
Pick a backend from the director.

Example:

    set req.backend_hint = vdir.backend();

new xrandom = directors.random()
Create a random backend director.

The random director distributes load over the backends using a weighted
random probability distribution.

The "testable" random generator in varnishd is used, which enables
deterministic tests to be run (see: d00004.vtc).

Example:

    new vdir = directors.random();

VOID xrandom.add_backend(BACKEND, REAL)
Add a backend to the director with a given weight.

Each backend will receive approximately
100 * (weight / sum(all_added_weights)) percent of the traffic sent to
this director.

Example:

    # 2/3 to backend1, 1/3 to backend2.
    vdir.add_backend(backend1, 10.0);
    vdir.add_backend(backend2, 5.0);

VOID xrandom.remove_backend(BACKEND)
Remove a backend from the director.

Example:

    vdir.remove_backend(backend1);

BACKEND xrandom.backend()
Pick a backend from the director.

Example:

    set req.backend_hint = vdir.backend();

new xhash = directors.hash()
Create a hashing backend director.

The director chooses the backend server by computing a hash/digest of
the string given to xhash.backend().

This is commonly used with client.ip or a session cookie to get sticky
sessions.

Example:

    new vdir = directors.hash();

VOID xhash.add_backend(BACKEND, REAL weight=1.0)
Add a backend to the director with a certain weight.

Weight is used as in the random director. The recommended and default
value is 1.0 unless you have special needs.

Example:

    vdir.add_backend(normal_backend);
    vdir.add_backend(larger_backend, 1.5);

VOID xhash.remove_backend(BACKEND)
Remove a backend from the director.

Example:

    vdir.remove_backend(larger_backend);

BACKEND xhash.backend(STRING)
Pick a backend from the hash director.

Use the string or list of strings provided to pick the backend.

Example:

    # Pick a backend based on the cookie header from the client
    set req.backend_hint = vdir.backend(req.http.cookie);

new xshard = directors.shard()
Create a shard director.

Introduction

The shard director selects backends by a key, which can be provided
directly or derived from strings. For the same key, the shard director
will always return the same backend, unless the backend configuration
or health state changes. Conversely, for differing keys, the shard
director will likely choose different backends. In the default
configuration, unhealthy backends are not selected.

The shard director resembles the hash director, but its main advantage
is that, when the backend configuration or health states change, the
association of keys to backends remains as stable as possible.

In addition, the rampup and warmup features can help to further improve
user-perceived response times.

Sharding

This basic technique allows for numerous applications, such as
optimizing backend server cache efficiency, Varnish clustering, or
persisting sessions to servers without keeping any state, and, in
particular, without the need to synchronize state between nodes of a
cluster of Varnish servers:

• Many applications use caches for data objects, so, in a cluster of
  application servers, requesting similar objects from the same server
  may help to optimize the efficiency of such caches.

  For example, sharding by URL or some id component of the url has been
  shown to drastically improve the efficiency of many content
  management systems.

• As a special case of the previous example, in clusters of Varnish
  servers without additional request distribution logic, each cache
  will need to store all hot objects, so the effective cache size is
  approximately the smallest cache size of any server in the cluster.

  Sharding makes it possible to segregate objects within the cluster
  such that each object is only cached on one of the servers (or on one
  primary and one backup, on a primary for long and others for short,
  etc.). Effectively, this will lead to a cache size on the order of
  the sum of all individual caches, with the potential to drastically
  increase efficiency (scales by the number of servers).

• Another application is to implement persistence of backend requests,
  such that all requests sharing a certain criterion (such as an IP
  address or session ID) get forwarded to the same backend server.

When used with clusters of Varnish servers, the shard director will, if
otherwise configured equally, make the same decision on all servers. In
other words, requests sharing a common criterion used as the shard key
will be balanced onto the same backend server(s) no matter which
Varnish server handles the request.

The drawbacks are:

• The distribution of requests depends on the number of requests per
  key and the uniformity of the distribution of key values. In short,
  while this technique may lead to much better efficiency overall, it
  may also lead to less effective load balancing for specific cases.

• When a backend server becomes unavailable, every persistence
  technique has to reselect a new backend server, but this technique
  will also switch back to the preferred server once it becomes healthy
  again. So, when used for persistence, it is generally less stable
  compared to stateful techniques (which would continue to use a
  selected server for as long as possible, or as dictated by a TTL).

Method

When xshard.reconfigure() is called explicitly (or implicitly at the
end of any task containing reconfigurations like xshard.add_backend()),
a consistent hashing circular data structure gets built from the last
32 bits of SHA256 hash values of <ident><n> (the default ident being
the backend name) for each backend and for a running number n from 1 to
the replicas argument to xshard.reconfigure(). Hashing creates the
seemingly random order for placement of backends on the consistent
hashing ring. When xshard.add_backend() was called with a weight
argument, replicas is scaled by that weight to add proportionally more
copies of that backend on the ring.

When xshard.backend() is called, a load balancing key gets generated
unless provided. The smallest hash value on the ring that is larger
than the key is looked up (searching clockwise and wrapping around as
necessary). The backend for this hash value is the preferred backend
for the given key.

If a healthy backend is requested, the search continues linearly along
the ring for as long as the backends found are unhealthy, until all
backends have been checked. The order of these "alternative backends"
on the ring is likely to differ for different keys. Alternative
backends can also be selected explicitly.

On consistent hashing see:

• http://www8.org/w8-papers/2a-webserver/caching/paper2.html

• http://www.audioscrobbler.net/development/ketama/

• svn://svn.audioscrobbler.net/misc/ketama

• http://en.wikipedia.org/wiki/Consistent_hashing

Error Reporting

Failing methods should report errors to VSL with the Error tag, so when
configuring the shard director, you are advised to check:

    varnishlog -I Error:^vmod_directors.shard

Additional information may be provided as Notices, which can be checked
using:

    varnishlog -I Notice:^vmod_directors.shard

VOID xshard.set_warmup(REAL probability=0.0)
Set the default warmup probability. See the warmup parameter of
xshard.backend(). If probability is 0.0 (default), warmup is disabled.

VOID xshard.set_rampup(DURATION duration=0)
Set the default rampup duration. See the rampup parameter of
xshard.backend(). If duration is 0 (default), rampup is disabled.

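For illustration, a sketch of setting director-wide defaults in
vcl_init{}; the director and backend names used here are hypothetical:

    sub vcl_init {
        new vdir = directors.shard();
        vdir.add_backend(backend1);
        vdir.add_backend(backend2);
        # Ramp up traffic to a newly healthy backend over 30 seconds.
        vdir.set_rampup(30s);
        # Send 5% of each key's requests to the next alternative
        # backend to keep it warm.
        vdir.set_warmup(0.05);
        vdir.reconfigure();
    }
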
VOID xshard.associate(BLOB param=0)
Associate a default directors.shard_param() object or clear an
association.

The value of the param argument must be a call to the
xshard_param.use() method. No argument clears the association.

The association can be changed per backend request using the param
argument of xshard.backend().

BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup], [REAL weight])

    BOOL xshard.add_backend(
        BACKEND backend,
        [STRING ident],
        [DURATION rampup],
        [REAL weight]
    )

Add a backend backend to the director.

ident: Optionally specify an identification string for this backend,
which will be hashed by xshard.reconfigure() to construct the
consistent hashing ring. The identification string defaults to the
backend name.

ident makes it possible to add multiple instances of the same backend.

rampup: Optionally specify a specific rampup time for this backend.
Otherwise, the per-director rampup time is used (see
xshard.set_rampup()).

weight: Optionally specify a weight to scale the xshard.reconfigure()
replicas parameter. weight is limited to a minimum of 1. Values above
10 probably do not make much sense. The effect of weight is also capped
such that the total number of replicas does not exceed UINT32_MAX.

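As a hypothetical sketch, the same backend can be added twice under
distinct identification strings, and per-backend rampup and weight can
be given; all names here are illustrative only:

    sub vcl_init {
        new vdir = directors.shard();
        vdir.add_backend(server1);
        vdir.add_backend(server1, ident="server1_b");
        vdir.add_backend(server2, rampup=20s, weight=2.0);
        vdir.reconfigure();
    }
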
BOOL xshard.remove_backend([BACKEND backend], [STRING ident])

    BOOL xshard.remove_backend(
        [BACKEND backend=0],
        [STRING ident=0]
    )

Remove backend(s) from the director. Either backend or ident must be
specified. ident removes a specific instance. If backend is given
without ident, all instances of this backend are removed.

BOOL xshard.clear()
Remove all backends from the director.

BOOL xshard.reconfigure(INT replicas=67)
Explicitly reconfigure the consistent hashing ring so that backend
changes become effective immediately.

If this method is not called explicitly, reconfiguration happens at the
end of the current task (after vcl_init{}, or when the current client
or backend task is finished).

INT xshard.key(STRING)
Convenience method to generate a sharding key for use with the key
argument to the xshard.backend() method by hashing the given string
with SHA256.

To generate sharding keys using other hashes, use a custom vmod like
vmod blobdigest with the key_blob argument of the xshard.backend()
method.

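For example, a sketch of sharding by a session cookie; the director
name vdir and the choice of header are illustrative:

    sub vcl_backend_fetch {
        set bereq.backend = vdir.backend(by=KEY,
            key=vdir.key(bereq.http.cookie));
    }
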
BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param], [ENUM resolve])

    BACKEND xshard.backend(
        [ENUM {HASH, URL, KEY, BLOB} by=HASH],
        [INT key],
        [BLOB key_blob],
        [INT alt=0],
        [REAL warmup=-1],
        [BOOL rampup=1],
        [ENUM {CHOSEN, IGNORE, ALL} healthy=CHOSEN],
        [BLOB param],
        [ENUM {NOW, LAZY} resolve]
    )

Look up a backend on the consistent hashing ring.

This documentation uses the notion of an order of backends for a
particular shard key. This order is deterministic but seemingly random,
as determined by the consistent hashing algorithm, and is likely to
differ for different keys, depending on the number of backends and the
number of replicas. In particular, the backend order referred to here
is _not_ the order in which backends are added.

• by: how to determine the sharding key

  • HASH:

    • when called in backend context and in vcl_pipe {}: use the
      varnish hash value as set by vcl_hash{}

    • when called in client context other than vcl_pipe {}: hash
      req.url

  • URL: hash req.url / bereq.url

  • KEY: use the key argument

  • BLOB: use the key_blob argument

• key: lookup key with by=KEY

  The xshard.key() method may come in handy to generate a sharding key
  from custom strings.

• key_blob: lookup key with by=BLOB

  Currently, this uses the first 4 bytes from the given blob in network
  byte order (big endian), left-padded with zeros for blobs smaller
  than 4 bytes.

• alt: alternative backend selection

  Select the alt-th alternative backend for the given key.

  This is particularly useful for retries / restarts due to backend
  errors: by setting alt=req.restarts or alt=bereq.retries with
  healthy=ALL, another server gets selected.

  The rampup and warmup features are only active for alt==0.

• rampup: slow start for servers which just went healthy

  If alt==0 and the chosen backend is in its rampup period, then, with
  a probability proportional to the ratio of the time since the backend
  became healthy to the rampup period, return the next alternative
  backend, unless that one is also in its rampup period.

  The default rampup interval can be set per shard director using the
  xshard.set_rampup() method, or specifically per backend with the
  xshard.add_backend() method.

• warmup: probabilistic alternative server selection

  Possible values: -1, 0..1.

  -1: use the warmup probability from the director definition.

  Only used for alt==0: sets the ratio of requests (0.0 to 1.0) that
  goes to the next alternative backend to warm it up when the preferred
  backend is healthy. Not active if any of the preferred or alternative
  backends are in rampup.

  warmup=0.5 is a convenient way to spread the load for each key over
  two backends under normal operating conditions.

• healthy

  • CHOSEN: Return a healthy backend if possible.

    For alt==0, return the first healthy backend or none.

    For alt > 0, ignore the health state of backends skipped for
    alternative backend selection, then return the next healthy
    backend. If this does not exist, return the last healthy backend of
    those skipped, or none.

  • IGNORE: Completely ignore backend health state.

    Just return the first or alt-th alternative backend, ignoring
    health state, rampup and warmup.

  • ALL: Check health state also for alternative backend selection.

    For alt > 0, return the alt-th alternative backend of all those
    healthy, the last healthy backend found, or none.

• resolve

  Default: LAZY in vcl_init{}, NOW otherwise.

  • NOW: look up a backend and return it.

    Cannot be used in vcl_init{}.

  • LAZY: return an instance of this director for later backend
    resolution.

    LAZY mode is required for referencing shard director instances, for
    example as backends for other directors (director layering).

    In vcl_init{} and on the client side, LAZY mode cannot be used with
    any other argument.

    On the backend side and in vcl_pipe {}, parameters from arguments
    or an associated parameter set affect the shard director instance
    for the backend request irrespective of where it is referenced.

• param

  Use or associate a parameter set. The value of the param argument
  must be a call to the xshard_param.use() method.

  Default: as set by xshard.associate(), or unset.

  • for resolve=NOW, take parameter defaults from the
    directors.shard_param() parameter set

  • for resolve=LAZY, associate the directors.shard_param() parameter
    set for this backend request

  Implementation notes for use of parameter sets with resolve=LAZY:

  • A param argument remains associated, and any changes to the
    associated parameter set affect the sharding decision once the
    director resolves to an actual backend.

  • If other parameter arguments are also given, they have preference
    and are kept even if the parameter set given by the param argument
    is subsequently changed within the same backend request.

  • Each call to xshard.backend() overrides any previous call.

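As an illustrative sketch (the director name vdir is hypothetical),
retries can be sent to an alternative backend for the same key by
combining the alt and healthy arguments:

    sub vcl_backend_fetch {
        set bereq.backend = vdir.backend(by=URL,
            alt=bereq.retries, healthy=ALL);
    }
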
VOID xshard.debug(INT)
Intentionally undocumented.

new xshard_param = directors.shard_param()
Create a shard parameter set.

A parameter set allows for re-use of xshard.backend() arguments across
many shard director instances and simplifies advanced use cases (e.g. a
shard director with custom parameters layered below other directors).

Parameter sets have two scopes:

• per-VCL scope, defined in vcl_init{}

• per backend request scope

The per-VCL scope defines defaults for the per backend scope. Any
changes to a parameter set in backend context and in vcl_pipe {} only
affect the respective backend request.

Parameter sets cannot be used in client context except for vcl_pipe
{}.

The following example is a typical use case: a parameter set is
associated with several directors. Director choice happens on the
client side and parameters are changed on the backend side to implement
retries on alternative backends:

    sub vcl_init {
        new shard_param = directors.shard_param();

        new dir_A = directors.shard();
        dir_A.add_backend(...);
        dir_A.reconfigure();
        dir_A.associate(shard_param.use()); # <-- !

        new dir_B = directors.shard();
        dir_B.add_backend(...);
        dir_B.reconfigure();
        dir_B.associate(shard_param.use()); # <-- !
    }

    sub vcl_recv {
        if (...) {
            set req.backend_hint = dir_A.backend(resolve=LAZY);
        } else {
            set req.backend_hint = dir_B.backend(resolve=LAZY);
        }
    }

    sub vcl_backend_fetch {
        # changes dir_A and dir_B behaviour
        shard_param.set(alt=bereq.retries, by=URL);
    }

VOID xshard_param.clear()
Reset the parameter set to default values as documented for
xshard.backend().

• in vcl_init{}, resets the parameter set default for this VCL

• in backend context and in vcl_pipe {}, resets the parameter set for
  this backend request to the VCL defaults

This method may not be used in client context other than vcl_pipe {}.

VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy])

    VOID xshard_param.set(
        [ENUM {HASH, URL, KEY, BLOB} by],
        [INT key],
        [BLOB key_blob],
        [INT alt],
        [REAL warmup],
        [BOOL rampup],
        [ENUM {CHOSEN, IGNORE, ALL} healthy]
    )

Change the given parameters of a parameter set as documented for
xshard.backend().

• in vcl_init{}, changes the parameter set default for this VCL

• in backend context and in vcl_pipe {}, changes the parameter set for
  this backend request, keeping the defaults set for this VCL for
  unspecified arguments

This method may not be used in client context other than vcl_pipe {}.

STRING xshard_param.get_by()
Get a string representation of the by enum argument, which denotes how
a shard director using this parameter object would derive the shard
key. See xshard.backend().

INT xshard_param.get_key()
Get the key which a shard director using this parameter object would
use. See xshard.backend().

INT xshard_param.get_alt()
Get the alt parameter which a shard director using this parameter
object would use. See xshard.backend().

REAL xshard_param.get_warmup()
Get the warmup parameter which a shard director using this parameter
object would use. See xshard.backend().

BOOL xshard_param.get_rampup()
Get the rampup parameter which a shard director using this parameter
object would use. See xshard.backend().

STRING xshard_param.get_healthy()
Get a string representation of the healthy enum argument which a shard
director using this parameter object would use. See xshard.backend().

BLOB xshard_param.use()
This method may only be used in backend context and in vcl_pipe {}.

For use with the param argument of xshard.backend() to associate this
shard parameter set with a shard director.

BACKEND lookup(STRING)
Look up a backend by its name.

This function can only be used from vcl_init{} and vcl_fini{}.

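For illustration, a sketch of resolving a backend by name in vcl_init{};
the backend name "backend1" is hypothetical:

    sub vcl_init {
        new vdir = directors.round_robin();
        vdir.add_backend(directors.lookup("backend1"));
    }
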
ACKNOWLEDGEMENTS
Development of a previous version of the shard director was partly
sponsored by Deutsche Telekom AG - Products & Innovation.

Development of a previous version of the shard director was partly
sponsored by BILD GmbH & Co KG.

COPYRIGHT
This document is licensed under the same licence as Varnish itself. See
LICENCE for details.

SPDX-License-Identifier: BSD-2-Clause

Copyright (c) 2013-2015 Varnish Software AS
Copyright 2009-2020 UPLEX - Nils Goroll Systemoptimierung
All rights reserved.

Authors: Poul-Henning Kamp <phk@FreeBSD.org>
         Julian Wiesener <jw@uplex.de>
         Nils Goroll <slink@uplex.de>
         Geoffrey Simmons <geoff@uplex.de>

SPDX-License-Identifier: BSD-2-Clause

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.



                                                             VMOD_DIRECTORS(3)