VMOD(DIRECTORS)                                                VMOD(DIRECTORS)


NAME

       VMOD directors - Varnish Directors Module

SYNOPSIS

          import directors [as name] [from "path"]

          new xround_robin = directors.round_robin()

              VOID xround_robin.add_backend(BACKEND)

              VOID xround_robin.remove_backend(BACKEND)

              BACKEND xround_robin.backend()

          new xfallback = directors.fallback(BOOL sticky)

              VOID xfallback.add_backend(BACKEND)

              VOID xfallback.remove_backend(BACKEND)

              BACKEND xfallback.backend()

          new xrandom = directors.random()

              VOID xrandom.add_backend(BACKEND, REAL)

              VOID xrandom.remove_backend(BACKEND)

              BACKEND xrandom.backend()

          new xhash = directors.hash()

              VOID xhash.add_backend(BACKEND, REAL)

              VOID xhash.remove_backend(BACKEND)

              BACKEND xhash.backend(STRING)

          new xshard = directors.shard()

              VOID xshard.set_warmup(REAL probability)

              VOID xshard.set_rampup(DURATION duration)

              VOID xshard.associate(BLOB param)

              BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup])

              BOOL xshard.remove_backend([BACKEND backend], [STRING ident])

              BOOL xshard.clear()

              BOOL xshard.reconfigure(INT replicas)

              INT xshard.key(STRING)

              BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param], [ENUM resolve])

              VOID xshard.debug(INT)

          new xshard_param = directors.shard_param()

              VOID xshard_param.clear()

              VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy])

              STRING xshard_param.get_by()

              INT xshard_param.get_key()

              INT xshard_param.get_alt()

              REAL xshard_param.get_warmup()

              BOOL xshard_param.get_rampup()

              STRING xshard_param.get_healthy()

              BLOB xshard_param.use()

          BACKEND lookup(STRING)


DESCRIPTION

       vmod_directors enables backend load balancing in Varnish.

       The module implements load balancing techniques, and also serves as an
       example of how one could extend the load balancing capabilities of
       Varnish.

       To enable load balancing you must import this vmod (directors).

       Then you define your backends. Once you have the backends declared,
       you can add them to a director from running VCL code. If you want to
       emulate the previous behavior of Varnish 3.0, you can simply
       initialize the directors in vcl_init{}, like this:

          sub vcl_init {
              new vdir = directors.round_robin();
              vdir.add_backend(backend1);
              vdir.add_backend(backend2);
          }

       As you can see, there is nothing keeping you from manipulating the
       directors elsewhere in VCL, so you could have VCL code that adds more
       backends to a director when a certain URL is called.

       Note that directors can use other directors as backends.

   new xround_robin = directors.round_robin()
       Create a round robin director.

       This director will pick backends in a round robin fashion.

       Example:

          new vdir = directors.round_robin();

   VOID xround_robin.add_backend(BACKEND)
       Add a backend to the round-robin director.

       Example:

          vdir.add_backend(backend1);

   VOID xround_robin.remove_backend(BACKEND)
       Remove a backend from the round-robin director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xround_robin.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xfallback = directors.fallback(BOOL sticky=0)
       Create a fallback director.

       A fallback director will try each of the added backends in turn, and
       return the first one that is healthy.

       If sticky is set to true, the director will keep using the healthy
       backend, even if a higher-priority backend becomes available. Once the
       whole backend list is exhausted, it will start over at the beginning.

       Example:

          new vdir = directors.fallback();

   VOID xfallback.add_backend(BACKEND)
       Add a backend to the director.

       Note that the order in which this is done matters for the fallback
       director.

       Example:

          vdir.add_backend(backend1);

   VOID xfallback.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xfallback.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xrandom = directors.random()
       Create a random backend director.

       The random director distributes load over the backends using a
       weighted random probability distribution.

       The "testable" random generator in varnishd is used, which enables
       deterministic tests to be run (see: d00004.vtc).

       Example:

          new vdir = directors.random();

   VOID xrandom.add_backend(BACKEND, REAL)
       Add a backend to the director with a given weight.

       Each backend will receive approximately
       100 * (weight / (sum(all_added_weights))) per cent of the traffic sent
       to this director.

       Example:

          # 2/3 to backend1, 1/3 to backend2.
          vdir.add_backend(backend1, 10.0);
          vdir.add_backend(backend2, 5.0);

   VOID xrandom.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xrandom.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xhash = directors.hash()
       Create a hashing backend director.

       The director chooses the backend server by computing a hash/digest of
       the string given to xhash.backend().

       Commonly used with client.ip or a session cookie to get sticky
       sessions.

       Example:

          new vdir = directors.hash();

   VOID xhash.add_backend(BACKEND, REAL)
       Add a backend to the director with a certain weight.

       Weight is used as in the random director. The recommended value is 1.0
       unless you have special needs.

       Example:

          vdir.add_backend(backend1, 1.0);

   VOID xhash.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xhash.backend(STRING)
       Pick a backend from the backend director.

       Use the string or list of strings provided to pick the backend.

       Example:

          # pick a backend based on the cookie header from the client
          set req.backend_hint = vdir.backend(req.http.cookie);

   new xshard = directors.shard()
       Create a shard director.

       Note that the shard director needs to be configured with one or more
       xshard.add_backend() calls followed by an xshard.reconfigure() call
       before it can hand out backends.

       Note that due to various restrictions (documented below), it is
       recommended to use the shard director on the backend side.

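       For example, a minimal shard configuration could look like this
       (backend names are illustrative):

          sub vcl_init {
              new vdir = directors.shard();
              vdir.add_backend(backend1);
              vdir.add_backend(backend2);
              vdir.reconfigure();
          }
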
   Introduction
       The shard director selects backends by a key, which can be provided
       directly or derived from strings. For the same key, the shard director
       will always return the same backend, unless the backend configuration
       or health state changes. Conversely, for differing keys, the shard
       director will likely choose different backends. In the default
       configuration, unhealthy backends are not selected.

       The shard director resembles the hash director, but its main advantage
       is that, when the backend configuration or health states change, the
       association of keys to backends remains as stable as possible.

       In addition, the rampup and warmup features can help to further
       improve user-perceived response times.

   Sharding
       This basic technique allows for numerous applications like optimizing
       backend server cache efficiency, Varnish clustering, or persisting
       sessions to servers without keeping any state, and, in particular,
       without the need to synchronize state between nodes of a cluster of
       Varnish servers:

       · Many applications use caches for data objects, so, in a cluster of
         application servers, requesting similar objects from the same server
         may help to optimize the efficiency of such caches.

         For example, sharding by URL or some id component of the URL has
         been shown to drastically improve the efficiency of many content
         management systems.

       · As a special case of the previous example, in clusters of Varnish
         servers without additional request distribution logic, each cache
         will need to store all hot objects, so the effective cache size is
         approximately the smallest cache size of any server in the cluster.

         Sharding makes it possible to segregate objects within the cluster
         such that each object is only cached on one of the servers (or on
         one primary and one backup, on a primary for long and others for
         short, etc.). Effectively, this will lead to a cache size in the
         order of the sum of all individual caches, with the potential to
         drastically increase efficiency (scales by the number of servers).

       · Another application is to implement persistence of backend requests,
         such that all requests sharing a certain criterion (such as an IP
         address or session ID) get forwarded to the same backend server.

       When used with clusters of Varnish servers, the shard director will,
       if otherwise configured equally, make the same decision on all
       servers. In other words, requests sharing a common criterion used as
       the shard key will be balanced onto the same backend server(s) no
       matter which Varnish server handles the request.

       The drawbacks are:

       · The distribution of requests depends on the number of requests per
         key and the uniformity of the distribution of key values. In short,
         while this technique may lead to much better efficiency overall, it
         may also lead to poorer load balancing for specific cases.

       · When a backend server becomes unavailable, every persistence
         technique has to reselect a new backend server, but this technique
         will also switch back to the preferred server once it becomes
         healthy again. So, when used for persistence, it is generally less
         stable compared to stateful techniques (which would continue to use
         a selected server for as long as possible, or as dictated by a TTL).

   Method
       When xshard.reconfigure() is called, a consistent hashing circular
       data structure gets built from the last 32 bits of the SHA256 hash
       values of <ident><n> (the default ident being the backend name) for
       each backend and for a running number n from 1 to replicas. Hashing
       creates the seemingly random order for the placement of backends on
       the consistent hashing ring.

       When xshard.backend() is called, a load balancing key gets generated
       unless provided. The smallest hash value on the ring that is larger
       than the key is looked up (searching clockwise and wrapping around as
       necessary). The backend for this hash value is the preferred backend
       for the given key.

       If a healthy backend is requested, the search continues linearly on
       the ring until a healthy backend is found or all backends have been
       checked. The order of these "alternative backends" on the ring is
       likely to differ for different keys. Alternative backends can also be
       selected explicitly.

       On consistent hashing see:

       · http://www8.org/w8-papers/2a-webserver/caching/paper2.html

       · http://www.audioscrobbler.net/development/ketama/

       · svn://svn.audioscrobbler.net/misc/ketama

       · http://en.wikipedia.org/wiki/Consistent_hashing

   Error Reporting
       Failing methods should report errors to VSL with the Error tag, so
       when configuring the shard director, you are advised to check:

          varnishlog -I Error:^shard

   VOID xshard.set_warmup(REAL probability=0.0)
       Set the default warmup probability. See the warmup parameter of
       xshard.backend(). If probability is 0.0 (default), warmup is disabled.

   VOID xshard.set_rampup(DURATION duration=0)
       Set the default rampup duration. See the rampup parameter of
       xshard.backend(). If duration is 0 (default), rampup is disabled.

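       For example, both defaults could be set when the director is created
       (the values shown are illustrative):

          sub vcl_init {
              new vdir = directors.shard();
              vdir.set_rampup(30s);
              vdir.set_warmup(0.5);
          }
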
   VOID xshard.associate(BLOB param=0)
       Associate a default directors.shard_param() object or clear an
       association.

       The value of the param argument must be a call to the
       xshard_param.use() method. No argument clears the association.

       The association can be changed per backend request using the param
       argument of xshard.backend().

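       For example, an association could be set up in vcl_init{} (the object
       names are illustrative):

          sub vcl_init {
              new p = directors.shard_param();
              new vdir = directors.shard();
              vdir.associate(p.use());
          }
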
   BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup])
          BOOL xshard.add_backend(
                BACKEND backend,
                [STRING ident],
                [DURATION rampup]
          )

       Add the backend backend to the director.

       ident: Optionally specify an identification string for this backend,
       which will be hashed by xshard.reconfigure() to construct the
       consistent hashing ring. The identification string defaults to the
       backend name.

       ident makes it possible to add multiple instances of the same
       backend.

       rampup: Optionally specify a specific rampup time for this backend.
       Otherwise, the per-director rampup time is used (see
       xshard.set_rampup()).

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

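       For example, ident can be used to place two instances of the same
       backend on the ring (names are illustrative):

          sub vcl_init {
              new vdir = directors.shard();
              vdir.add_backend(backend1);
              vdir.add_backend(backend1, ident = "backend1.2");
              vdir.reconfigure();
          }
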
   BOOL xshard.remove_backend([BACKEND backend], [STRING ident])
          BOOL xshard.remove_backend(
                [BACKEND backend=0],
                [STRING ident=0]
          )

       Remove backend(s) from the director. Either backend or ident must be
       specified. ident removes a specific instance. If backend is given
       without ident, all instances of this backend are removed.

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

   BOOL xshard.clear()
       Remove all backends from the director.

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

   BOOL xshard.reconfigure(INT replicas=67)
       Reconfigure the consistent hashing ring to reflect backend changes.

       This method must be called at least once before the director can be
       used.

   INT xshard.key(STRING)
       Convenience method to generate a sharding key for use with the key
       argument to the xshard.backend() method by hashing the given string
       with SHA256.

       To generate sharding keys using other hashes, use a custom vmod like
       vmod blobdigest with the key_blob argument of the xshard.backend()
       method.

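       For example, a key derived from the client IP could be used for
       session persistence (a sketch; the choice of input string is
       illustrative):

          sub vcl_recv {
              set req.backend_hint = vdir.backend(by = KEY,
                  key = vdir.key(client.ip));
          }
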
   BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt],
       [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param],
       [ENUM resolve])
          BACKEND xshard.backend(
                [ENUM {HASH, URL, KEY, BLOB} by=HASH],
                [INT key],
                [BLOB key_blob],
                [INT alt=0],
                [REAL warmup=-1],
                [BOOL rampup=1],
                [ENUM {CHOSEN, IGNORE, ALL} healthy=CHOSEN],
                [BLOB param],
                [ENUM {NOW, LAZY} resolve]
          )

       Look up a backend on the consistent hashing ring.

       This documentation uses the notion of an order of backends for a
       particular shard key. This order is deterministic but seemingly random
       as determined by the consistent hashing algorithm and is likely to
       differ for different keys, depending on the number of backends and the
       number of replicas. In particular, the backend order referred to here
       is _not_ the order given when backends are added.

       · by how to determine the sharding key

         · HASH:

           · when called in backend context: use the varnish hash value as
             set by vcl_hash{}

           · when called in client context: hash req.url

         · URL: hash req.url / bereq.url

         · KEY: use the key argument

         · BLOB: use the key_blob argument

       · key lookup key with by=KEY

         The xshard.key() method may come in handy to generate a sharding
         key from custom strings.

       · key_blob lookup key with by=BLOB

         Currently, this uses the first 4 bytes from the given blob in
         network byte order (big endian), left-padded with zeros for blobs
         smaller than 4 bytes.

       · alt alternative backend selection

         Select the alt-th alternative backend for the given key.

         This is particularly useful for retries / restarts due to backend
         errors: by setting alt=req.restarts or alt=bereq.retries with
         healthy=ALL, another server gets selected.

         The rampup and warmup features are only active for alt==0.

       · rampup slow start for servers which just went healthy

         If alt==0 and the chosen backend is in its rampup period, return the
         next alternative backend with a probability proportional to the
         fraction of the rampup period elapsed since the backend became
         healthy, unless that backend is also in its rampup period.

         The default rampup interval can be set per shard director using the
         xshard.set_rampup() method or specifically per backend with the
         xshard.add_backend() method.

       · warmup probabilistic alternative server selection

         possible values: -1, 0..1

         -1: use the warmup probability from the director definition

         Only used for alt==0: sets the ratio of requests (0.0 to 1.0) that
         goes to the next alternative backend to warm it up when the
         preferred backend is healthy. Not active if any of the preferred or
         alternative backends are in rampup.

         warmup=0.5 is a convenient way to spread the load for each key over
         two backends under normal operating conditions.

       · healthy

         · CHOSEN: Return a healthy backend if possible.

           For alt==0, return the first healthy backend or none.

           For alt > 0, ignore the health state of backends skipped for
           alternative backend selection, then return the next healthy
           backend. If this does not exist, return the last healthy backend
           of those skipped, or none.

         · IGNORE: Completely ignore backend health state.

           Just return the first or alt-th alternative backend, ignoring
           health state, rampup and warmup.

         · ALL: Check health state also for alternative backend selection.

           For alt > 0, return the alt-th alternative backend of all those
           healthy, the last healthy backend found, or none.

       · resolve

         default: LAZY in vcl_init{}, NOW otherwise

         · NOW: look up a backend and return it.

           Cannot be used in vcl_init{}.

         · LAZY: return an instance of this director for later backend
           resolution.

           LAZY mode is required for referencing shard director instances,
           for example as backends for other directors (director layering).

           In vcl_init{} and on the client side, LAZY mode cannot be used
           with any other argument.

           On the backend side, parameters from arguments or an associated
           parameter set affect the shard director instance for the backend
           request irrespective of where it is referenced.

       · param

         Use or associate a parameter set. The value of the param argument
         must be a call to the xshard_param.use() method.

         default: as set by xshard.associate() or unset.

         · for resolve=NOW take parameter defaults from the
           directors.shard_param() parameter set

         · for resolve=LAZY associate the directors.shard_param() parameter
           set for this backend request

           Implementation notes for the use of parameter sets with
           resolve=LAZY:

           · A param argument remains associated, and any changes to the
             associated parameter set affect the sharding decision once the
             director resolves to an actual backend.

           · If other parameter arguments are also given, they have
             preference and are kept even if the parameter set given by the
             param argument is subsequently changed within the same backend
             request.

           · Each call to xshard.backend() overrides any previous call.

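       As an example, alt and healthy can be combined to retry fetches on
       alternative backends (a sketch, assuming a shard director named vdir):

          sub vcl_backend_fetch {
              set bereq.backend = vdir.backend(alt = bereq.retries,
                                               healthy = ALL);
          }
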
   VOID xshard.debug(INT)
       intentionally undocumented

   new xshard_param = directors.shard_param()
       Create a shard parameter set.

       A parameter set allows for re-use of xshard.backend() arguments across
       many shard director instances and simplifies advanced use cases (e.g.
       a shard director with custom parameters layered below other
       directors).

       Parameter sets have two scopes:

       · per-VCL scope defined in vcl_init{}

       · per backend request scope

       The per-VCL scope defines defaults for the per backend scope. Any
       changes to a parameter set in backend context only affect the
       respective backend request.

       Parameter sets can not be used in client context.

       The following example is a typical use case: A parameter set is
       associated with several directors. Director choice happens on the
       client side and parameters are changed on the backend side to
       implement retries on alternative backends:

          sub vcl_init {
            new shard_param = directors.shard_param();

            new dir_A = directors.shard();
            dir_A.add_backend(...);
            dir_A.reconfigure();
            dir_A.associate(shard_param.use()); # <-- !

            new dir_B = directors.shard();
            dir_B.add_backend(...);
            dir_B.reconfigure();
            dir_B.associate(shard_param.use()); # <-- !
          }

          sub vcl_recv {
            if (...) {
              set req.backend_hint = dir_A.backend(resolve=LAZY);
            } else {
              set req.backend_hint = dir_B.backend(resolve=LAZY);
            }
          }

          sub vcl_backend_fetch {
            # changes dir_A and dir_B behaviour
            shard_param.set(alt=bereq.retries);
          }

   VOID xshard_param.clear()
       Reset the parameter set to default values as documented for
       xshard.backend().

       · in vcl_init{}, resets the parameter set default for this VCL

       · in backend context, resets the parameter set for this backend
         request to the VCL defaults

       This method may not be used in client context.

   VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt],
       [REAL warmup], [BOOL rampup], [ENUM healthy])
          VOID xshard_param.set(
                [ENUM {HASH, URL, KEY, BLOB} by],
                [INT key],
                [BLOB key_blob],
                [INT alt],
                [REAL warmup],
                [BOOL rampup],
                [ENUM {CHOSEN, IGNORE, ALL} healthy]
          )

       Change the given parameters of a parameter set as documented for
       xshard.backend().

       · in vcl_init{}, changes the parameter set default for this VCL

       · in backend context, changes the parameter set for this backend
         request, keeping the defaults set for this VCL for unspecified
         arguments.

       This method may not be used in client context.

   STRING xshard_param.get_by()
       Get a string representation of the by enum argument, which denotes how
       a shard director using this parameter object would derive the shard
       key. See xshard.backend().

   INT xshard_param.get_key()
       Get the key which a shard director using this parameter object would
       use. See xshard.backend().

   INT xshard_param.get_alt()
       Get the alt parameter which a shard director using this parameter
       object would use. See xshard.backend().

   REAL xshard_param.get_warmup()
       Get the warmup parameter which a shard director using this parameter
       object would use. See xshard.backend().

   BOOL xshard_param.get_rampup()
       Get the rampup parameter which a shard director using this parameter
       object would use. See xshard.backend().

   STRING xshard_param.get_healthy()
       Get a string representation of the healthy enum argument which a
       shard director using this parameter object would use. See
       xshard.backend().

   BLOB xshard_param.use()
       This method may only be used in backend context.

       For use with the param argument of xshard.backend() to associate this
       shard parameter set with a shard director.

   BACKEND lookup(STRING)
       Look up a backend by its name.

       This function can only be used from vcl_init{} and vcl_fini{}.

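       For example (the backend name is illustrative):

          sub vcl_init {
              new vdir = directors.fallback();
              vdir.add_backend(directors.lookup("backend1"));
          }
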

ACKNOWLEDGEMENTS

       Development of a previous version of the shard director was partly
       sponsored by Deutsche Telekom AG - Products & Innovation.

       Development of a previous version of the shard director was partly
       sponsored by BILD GmbH & Co KG.

          This document is licensed under the same licence as Varnish
          itself. See LICENCE for details.

          Copyright (c) 2013-2015 Varnish Software AS
          Copyright 2009-2018 UPLEX - Nils Goroll Systemoptimierung
          All rights reserved.

          Authors: Poul-Henning Kamp <phk@FreeBSD.org>
                   Julian Wiesener <jw@uplex.de>
                   Nils Goroll <slink@uplex.de>
                   Geoffrey Simmons <geoff@uplex.de>

          Redistribution and use in source and binary forms, with or without
          modification, are permitted provided that the following conditions
          are met:
          1. Redistributions of source code must retain the above copyright
             notice, this list of conditions and the following disclaimer.
          2. Redistributions in binary form must reproduce the above copyright
             notice, this list of conditions and the following disclaimer in the
             documentation and/or other materials provided with the distribution.

          THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
          ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
          IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
          ARE DISCLAIMED.  IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
          FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
          DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
          OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
          HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
          LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
          OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
          SUCH DAMAGE.