VMOD_DIRECTORS(3)                                            VMOD_DIRECTORS(3)


NAME
       vmod_directors - Varnish Directors Module

SYNOPSIS

          import directors [as name] [from "path"]

          new xround_robin = directors.round_robin()

              VOID xround_robin.add_backend(BACKEND)

              VOID xround_robin.remove_backend(BACKEND)

              BACKEND xround_robin.backend()

          new xfallback = directors.fallback(BOOL sticky)

              VOID xfallback.add_backend(BACKEND)

              VOID xfallback.remove_backend(BACKEND)

              BACKEND xfallback.backend()

          new xrandom = directors.random()

              VOID xrandom.add_backend(BACKEND, REAL)

              VOID xrandom.remove_backend(BACKEND)

              BACKEND xrandom.backend()

          new xhash = directors.hash()

              VOID xhash.add_backend(BACKEND, REAL)

              VOID xhash.remove_backend(BACKEND)

              BACKEND xhash.backend(STRING)

          new xshard = directors.shard()

              VOID xshard.set_warmup(REAL probability)

              VOID xshard.set_rampup(DURATION duration)

              VOID xshard.associate(BLOB param)

              BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup], [REAL weight])

              BOOL xshard.remove_backend([BACKEND backend], [STRING ident])

              BOOL xshard.clear()

              BOOL xshard.reconfigure(INT replicas)

              INT xshard.key(STRING)

              BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param], [ENUM resolve])

              VOID xshard.debug(INT)

          new xshard_param = directors.shard_param()

              VOID xshard_param.clear()

              VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt], [REAL warmup], [BOOL rampup], [ENUM healthy])

              STRING xshard_param.get_by()

              INT xshard_param.get_key()

              INT xshard_param.get_alt()

              REAL xshard_param.get_warmup()

              BOOL xshard_param.get_rampup()

              STRING xshard_param.get_healthy()

              BLOB xshard_param.use()

          BACKEND lookup(STRING)

DESCRIPTION

       vmod_directors enables backend load balancing in Varnish.

       The module implements load balancing techniques, and also serves as an
       example of how one could extend the load balancing capabilities of
       Varnish.

       To enable load balancing you must import this vmod (directors).

       Then you define your backends. Once you have the backends declared you
       can add them to a director. This happens in executed VCL code. If you
       want to emulate the previous behavior of Varnish 3.0 you can simply
       initialize the directors in vcl_init{}, like this:

          sub vcl_init {
              new vdir = directors.round_robin();
              vdir.add_backend(backend1);
              vdir.add_backend(backend2);
          }

       As you can see, there is nothing keeping you from manipulating the
       directors elsewhere in VCL. For example, you could have VCL code that
       adds more backends to a director when a certain URL is called.

       Note that directors can use other directors as backends.

   new xround_robin = directors.round_robin()
       Create a round robin director.

       This director will pick backends in a round robin fashion.

       Example:

          new vdir = directors.round_robin();

   VOID xround_robin.add_backend(BACKEND)
       Add a backend to the round-robin director.

       Example:

          vdir.add_backend(backend1);

   VOID xround_robin.remove_backend(BACKEND)
       Remove a backend from the round-robin director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xround_robin.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xfallback = directors.fallback(BOOL sticky=0)
       Create a fallback director.

       A fallback director will try each of the added backends in turn, and
       return the first one that is healthy.

       If sticky is set to true, the director will keep using the healthy
       backend, even if a higher-priority backend becomes available. Once the
       whole backend list is exhausted, it will start over at the beginning.

       Example:

          new vdir = directors.fallback();

   VOID xfallback.add_backend(BACKEND)
       Add a backend to the director.

       Note that the order in which this is done matters for the fallback
       director.

       Example:

          vdir.add_backend(backend1);

   VOID xfallback.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xfallback.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xrandom = directors.random()
       Create a random backend director.

       The random director distributes load over the backends using a
       weighted random probability distribution.

       The "testable" random generator in varnishd is used, which enables
       deterministic tests to be run (See: d00004.vtc).

       Example:

          new vdir = directors.random();

   VOID xrandom.add_backend(BACKEND, REAL)
       Add a backend to the director with a given weight.

       Each backend will receive approximately
       100 * (weight / (sum(all_added_weights))) per cent of the traffic sent
       to this director.

       Example:

          # 2/3 to backend1, 1/3 to backend2.
          vdir.add_backend(backend1, 10.0);
          vdir.add_backend(backend2, 5.0);

   VOID xrandom.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xrandom.backend()
       Pick a backend from the director.

       Example:

          set req.backend_hint = vdir.backend();

   new xhash = directors.hash()
       Create a hashing backend director.

       The director chooses the backend server by computing a hash/digest of
       the string given to xhash.backend().

       Commonly used with client.ip or a session cookie to get sticky
       sessions.

       Example:

          new vdir = directors.hash();

   VOID xhash.add_backend(BACKEND, REAL)
       Add a backend to the director with a certain weight.

       Weight is used as in the random director. The recommended value is 1.0
       unless you have special needs.

       Example:

          vdir.add_backend(backend1, 1.0);

   VOID xhash.remove_backend(BACKEND)
       Remove a backend from the director.

       Example:

          vdir.remove_backend(backend1);

   BACKEND xhash.backend(STRING)
       Pick a backend from the backend director.

       Use the string or list of strings provided to pick the backend.

       Example:

          # pick a backend based on the cookie header from the client
          set req.backend_hint = vdir.backend(req.http.cookie);

   new xshard = directors.shard()
       Create a shard director.

       Note that the shard director needs to be configured using at least one
       xshard.add_backend() call followed by an xshard.reconfigure() call
       before it can hand out backends.

       _Note_ that due to various restrictions (documented below), it is
       recommended to use the shard director on the backend side.

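       As an illustration, a minimal complete setup might look like the
       following sketch, where backend1 and backend2 stand in for backends
       declared elsewhere in the VCL:

          sub vcl_init {
              new vdir = directors.shard();
              vdir.add_backend(backend1);
              vdir.add_backend(backend2);
              vdir.reconfigure();
          }

          sub vcl_backend_fetch {
              set bereq.backend = vdir.backend(by=URL);
          }
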
   Introduction
       The shard director selects backends by a key, which can be provided
       directly or derived from strings. For the same key, the shard director
       will always return the same backend, unless the backend configuration
       or health state changes. Conversely, for differing keys, the shard
       director will likely choose different backends. In the default
       configuration, unhealthy backends are not selected.

       The shard director resembles the hash director, but its main advantage
       is that, when the backend configuration or health states change, the
       association of keys to backends remains as stable as possible.

       In addition, the rampup and warmup features can help to further
       improve user-perceived response times.

   Sharding
       This basic technique allows for numerous applications like optimizing
       backend server cache efficiency, Varnish clustering, or persisting
       sessions to servers without keeping any state, and, in particular,
       without the need to synchronize state between nodes of a cluster of
       Varnish servers:

       • Many applications use caches for data objects, so, in a cluster of
         application servers, requesting similar objects from the same server
         may help to optimize the efficiency of such caches.

         For example, sharding by URL or some id component of the URL has
         been shown to drastically improve the efficiency of many content
         management systems.

       • As a special case of the previous example, in clusters of Varnish
         servers without additional request distribution logic, each cache
         will need to store all hot objects, so the effective cache size is
         approximately the smallest cache size of any server in the cluster.

         Sharding makes it possible to segregate objects within the cluster
         such that each object is only cached on one of the servers (or on
         one primary and one backup, on a primary for long-lived and on
         others for short-lived objects, etc.). Effectively, this will lead
         to a cache size on the order of the sum of all individual caches,
         with the potential to drastically increase efficiency (it scales
         with the number of servers).

       • Another application is to implement persistence of backend requests,
         such that all requests sharing a certain criterion (such as an IP
         address or session ID) get forwarded to the same backend server.

       When used with clusters of varnish servers, the shard director will,
       if otherwise configured equally, make the same decision on all
       servers. In other words, requests sharing a common criterion used as
       the shard key will be balanced onto the same backend server(s) no
       matter which Varnish server handles the request.

       The drawbacks are:

       • The distribution of requests depends on the number of requests per
         key and the uniformity of the distribution of key values. In short,
         while this technique may lead to much better efficiency overall, it
         may also lead to poorer load balancing for specific cases.

       • When a backend server becomes unavailable, every persistence
         technique has to reselect a new backend server, but this technique
         will also switch back to the preferred server once it becomes
         healthy again. So, when used for persistence, it is generally less
         stable compared to stateful techniques, which would continue to use
         a selected server for as long as possible (or as dictated by a TTL).

   Method
       When xshard.reconfigure() is called, a consistent hashing circular
       data structure gets built from the last 32 bits of SHA256 hash values
       of <ident><n> (the default ident being the backend name) for each
       backend and for a running number n from 1 to replicas. Hashing creates
       the seemingly random order for placement of backends on the consistent
       hashing ring. When xshard.add_backend() was called with a weight
       argument, replicas is scaled by that weight to add proportionally more
       copies of that backend on the ring.

       When xshard.backend() is called, a load balancing key gets generated
       unless provided. The smallest hash value on the ring that is larger
       than the key is looked up (searching clockwise and wrapping around as
       necessary). The backend for this hash value is the preferred backend
       for the given key.

       If a healthy backend is requested, the search continues linearly on
       the ring as long as the backends found are unhealthy, until all
       backends have been checked. The order of these "alternative backends"
       on the ring is likely to differ for different keys. Alternative
       backends can also be selected explicitly.

       On consistent hashing see:

       • http://www8.org/w8-papers/2a-webserver/caching/paper2.html

       • http://www.audioscrobbler.net/development/ketama/

       • svn://svn.audioscrobbler.net/misc/ketama

       • http://en.wikipedia.org/wiki/Consistent_hashing

   Error Reporting
       Failing methods should report errors to VSL with the Error tag, so
       when configuring the shard director, you are advised to check:

          varnishlog -I Error:^shard

   VOID xshard.set_warmup(REAL probability=0.0)
       Set the default warmup probability. See the warmup parameter of
       xshard.backend(). If probability is 0.0 (default), warmup is disabled.

   VOID xshard.set_rampup(DURATION duration=0)
       Set the default rampup duration. See the rampup parameter of
       xshard.backend(). If duration is 0 (default), rampup is disabled.

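       As a sketch, both defaults could be set right after creating the
       director in vcl_init{} (the values 30s and 0.5 are made up for this
       example):

          sub vcl_init {
              new vdir = directors.shard();
              vdir.set_rampup(30s);
              vdir.set_warmup(0.5);
          }
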
   VOID xshard.associate(BLOB param=0)
       Associate a default directors.shard_param() object or clear an
       association.

       The value of the param argument must be a call to the
       xshard_param.use() method. No argument clears the association.

       The association can be changed per backend request using the param
       argument of xshard.backend().

   BOOL xshard.add_backend(BACKEND backend, [STRING ident], [DURATION rampup],
       [REAL weight])
          BOOL xshard.add_backend(
                BACKEND backend,
                [STRING ident],
                [DURATION rampup],
                [REAL weight]
          )

       Add the backend backend to the director.

       ident: Optionally specify an identification string for this backend,
       which will be hashed by xshard.reconfigure() to construct the
       consistent hashing ring. The identification string defaults to the
       backend name.

       ident makes it possible to add multiple instances of the same backend.

       rampup: Optionally specify a specific rampup time for this backend.
       Otherwise, the per-director rampup time is used (see
       xshard.set_rampup()).

       weight: Optionally specify a weight to scale the xshard.reconfigure()
       replicas parameter. weight is limited to at least 1. Values above 10
       probably do not make much sense. The effect of weight is also capped
       such that the total number of replicas does not exceed UINT32_MAX.

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

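       For illustration, the following sketch adds one backend twice under
       distinct idents and gives another backend twice the default number of
       replicas (the ident string "backend1_b" is made up for this example):

          sub vcl_init {
              new vdir = directors.shard();
              vdir.add_backend(backend1);
              vdir.add_backend(backend1, ident="backend1_b");
              vdir.add_backend(backend2, weight=2.0);
              vdir.reconfigure();
          }
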
   BOOL xshard.remove_backend([BACKEND backend], [STRING ident])
          BOOL xshard.remove_backend(
                [BACKEND backend=0],
                [STRING ident=0]
          )

       Remove backend(s) from the director. Either backend or ident must be
       specified. ident removes a specific instance. If backend is given
       without ident, all instances of this backend are removed.

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

   BOOL xshard.clear()
       Remove all backends from the director.

       NOTE: Backend changes need to be finalized with xshard.reconfigure()
       and are only supported on one shard director at a time.

   BOOL xshard.reconfigure(INT replicas=67)
       Reconfigure the consistent hashing ring to reflect backend changes.

       This method must be called at least once before the director can be
       used.

   INT xshard.key(STRING)
       Convenience method to generate a sharding key for use with the key
       argument to the xshard.backend() method by hashing the given string
       with SHA256.

       To generate sharding keys using other hashes, use a custom vmod like
       vmod blobdigest with the key_blob argument of the xshard.backend()
       method.

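       As a sketch, a key derived from the Host header (rather than the
       default hash) could be passed to xshard.backend() like this, assuming
       a shard director vdir was configured in vcl_init{}:

          sub vcl_backend_fetch {
              set bereq.backend = vdir.backend(by=KEY,
                                               key=vdir.key(bereq.http.host));
          }
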
   BACKEND xshard.backend([ENUM by], [INT key], [BLOB key_blob], [INT alt],
       [REAL warmup], [BOOL rampup], [ENUM healthy], [BLOB param],
       [ENUM resolve])
          BACKEND xshard.backend(
                [ENUM {HASH, URL, KEY, BLOB} by=HASH],
                [INT key],
                [BLOB key_blob],
                [INT alt=0],
                [REAL warmup=-1],
                [BOOL rampup=1],
                [ENUM {CHOSEN, IGNORE, ALL} healthy=CHOSEN],
                [BLOB param],
                [ENUM {NOW, LAZY} resolve]
          )

       Look up a backend on the consistent hashing ring.

       This documentation uses the notion of an order of backends for a
       particular shard key. This order is deterministic but seemingly random
       as determined by the consistent hashing algorithm and is likely to
       differ for different keys, depending on the number of backends and the
       number of replicas. In particular, the backend order referred to here
       is _not_ the order in which backends are added.

       • by how to determine the sharding key

         • HASH:

           • when called in backend context and in vcl_pipe {}: use the
             varnish hash value as set by vcl_hash{}

           • when called in client context other than vcl_pipe {}: hash
             req.url

         • URL: hash req.url / bereq.url

         • KEY: use the key argument

         • BLOB: use the key_blob argument

       • key lookup key with by=KEY

         The xshard.key() method may come in handy to generate a sharding key
         from custom strings.

       • key_blob lookup key with by=BLOB

         Currently, this uses the first 4 bytes from the given blob in
         network byte order (big endian), left-padded with zeros for blobs
         smaller than 4 bytes.

       • alt alternative backend selection

         Select the alt-th alternative backend for the given key.

         This is particularly useful for retries / restarts due to backend
         errors: by setting alt=req.restarts or alt=bereq.retries with
         healthy=ALL, another server gets selected.

         The rampup and warmup features are only active for alt==0.

       • rampup slow start for servers which just went healthy

         If alt==0 and the chosen backend is in its rampup period, then, with
         a probability proportional to the fraction of the rampup period that
         has passed since the backend became healthy, return the next
         alternative backend, unless this one is also in its rampup period.

         The default rampup interval can be set per shard director using the
         xshard.set_rampup() method or specifically per backend with the
         xshard.add_backend() method.

       • warmup probabilistic alternative server selection

         possible values: -1, 0..1

         -1: use the warmup probability from the director definition

         Only used for alt==0: sets the ratio of requests (0.0 to 1.0) that
         goes to the next alternative backend to warm it up while the
         preferred backend is healthy. Not active if either the preferred or
         the alternative backend is in rampup.

         warmup=0.5 is a convenient way to spread the load for each key over
         two backends under normal operating conditions.

       • healthy

         • CHOSEN: Return a healthy backend if possible.

           For alt==0, return the first healthy backend or none.

           For alt > 0, ignore the health state of backends skipped for
           alternative backend selection, then return the next healthy
           backend. If none exists, return the last healthy backend of those
           skipped, or none.

         • IGNORE: Completely ignore backend health state.

           Just return the first or alt-th alternative backend, ignoring
           health state, rampup and warmup.

         • ALL: Check health state also for alternative backend selection.

           For alt > 0, return the alt-th alternative backend out of all
           healthy backends, the last healthy backend found, or none.

       • resolve

         default: LAZY in vcl_init{}, NOW otherwise

         • NOW: look up a backend and return it.

           Cannot be used in vcl_init{}.

         • LAZY: return an instance of this director for later backend
           resolution.

           LAZY mode is required for referencing shard director instances,
           for example as backends for other directors (director layering).

           In vcl_init{} and on the client side, LAZY mode cannot be used
           with any other argument.

           On the backend side and in vcl_pipe {}, parameters from arguments
           or an associated parameter set affect the shard director instance
           for the backend request irrespective of where it is referenced.

       • param

         Use or associate a parameter set. The value of the param argument
         must be a call to the xshard_param.use() method.

         default: as set by xshard.associate() or unset.

         • for resolve=NOW, take parameter defaults from the
           directors.shard_param() parameter set

         • for resolve=LAZY, associate the directors.shard_param() parameter
           set for this backend request

           Implementation notes for use of parameter sets with resolve=LAZY:

           • A param argument remains associated, and any changes to the
             associated parameter set affect the sharding decision once the
             director resolves to an actual backend.

           • If other parameter arguments are also given, they take
             precedence and are kept even if the parameter set given by the
             param argument is subsequently changed within the same backend
             request.

           • Each call to xshard.backend() overrides any previous call.

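       Putting some of the arguments above together, one way to select
       another server on retries could look like the following sketch,
       assuming a shard director vdir was configured in vcl_init{}:

          sub vcl_backend_fetch {
              if (bereq.retries > 0) {
                  # on retry, pick the bereq.retries-th alternative backend
                  set bereq.backend = vdir.backend(alt=bereq.retries,
                                                   healthy=ALL);
              } else {
                  set bereq.backend = vdir.backend();
              }
          }
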
   VOID xshard.debug(INT)
       intentionally undocumented

   new xshard_param = directors.shard_param()
       Create a shard parameter set.

       A parameter set allows for re-use of xshard.backend() arguments across
       many shard director instances and simplifies advanced use cases (e.g.
       a shard director with custom parameters layered below other
       directors).

       Parameter sets have two scopes:

       • per-VCL scope, defined in vcl_init{}

       • per backend request scope

       The per-VCL scope defines defaults for the per backend scope. Any
       changes to a parameter set in backend context and in vcl_pipe {} only
       affect the respective backend request.

       Parameter sets can not be used in client context except for vcl_pipe
       {}.

       The following example is a typical use case: a parameter set is
       associated with several directors. Director choice happens on the
       client side and parameters are changed on the backend side to
       implement retries on alternative backends:

          sub vcl_init {
            new shard_param = directors.shard_param();

            new dir_A = directors.shard();
            dir_A.add_backend(...);
            dir_A.reconfigure();
            dir_A.associate(shard_param.use()); # <-- !

            new dir_B = directors.shard();
            dir_B.add_backend(...);
            dir_B.reconfigure();
            dir_B.associate(shard_param.use()); # <-- !
          }

          sub vcl_recv {
            if (...) {
              set req.backend_hint = dir_A.backend(resolve=LAZY);
            } else {
              set req.backend_hint = dir_B.backend(resolve=LAZY);
            }
          }

          sub vcl_backend_fetch {
            # changes dir_A and dir_B behaviour
            shard_param.set(alt=bereq.retries);
          }

   VOID xshard_param.clear()
       Reset the parameter set to default values as documented for
       xshard.backend().

       • in vcl_init{}, resets the parameter set default for this VCL

       • in backend context and in vcl_pipe {}, resets the parameter set for
         this backend request to the VCL defaults

       This method may not be used in client context other than vcl_pipe {}.

   VOID xshard_param.set([ENUM by], [INT key], [BLOB key_blob], [INT alt],
       [REAL warmup], [BOOL rampup], [ENUM healthy])
          VOID xshard_param.set(
                [ENUM {HASH, URL, KEY, BLOB} by],
                [INT key],
                [BLOB key_blob],
                [INT alt],
                [REAL warmup],
                [BOOL rampup],
                [ENUM {CHOSEN, IGNORE, ALL} healthy]
          )

       Change the given parameters of a parameter set as documented for
       xshard.backend().

       • in vcl_init{}, changes the parameter set default for this VCL

       • in backend context and in vcl_pipe {}, changes the parameter set for
         this backend request, keeping the defaults set for this VCL for
         unspecified arguments.

       This method may not be used in client context other than vcl_pipe {}.

   STRING xshard_param.get_by()
       Get a string representation of the by enum argument, which denotes how
       a shard director using this parameter object would derive the shard
       key. See xshard.backend().

   INT xshard_param.get_key()
       Get the key which a shard director using this parameter object would
       use. See xshard.backend().

   INT xshard_param.get_alt()
       Get the alt parameter which a shard director using this parameter
       object would use. See xshard.backend().

   REAL xshard_param.get_warmup()
       Get the warmup parameter which a shard director using this parameter
       object would use. See xshard.backend().

   BOOL xshard_param.get_rampup()
       Get the rampup parameter which a shard director using this parameter
       object would use. See xshard.backend().

   STRING xshard_param.get_healthy()
       Get a string representation of the healthy enum argument which a shard
       director using this parameter object would use. See xshard.backend().

   BLOB xshard_param.use()
       This method may only be used in backend context and in vcl_pipe {}.

       For use with the param argument of xshard.backend() to associate this
       shard parameter set with a shard director.

   BACKEND lookup(STRING)
       Look up a backend by its name.

       This function can only be used from vcl_init{} and vcl_fini{}.

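       For example, a director could be populated by backend name in
       vcl_init{} along the following lines (assuming a backend named
       backend1 exists in this VCL, and that the vmod was imported as
       directors):

          sub vcl_init {
              new vdir = directors.round_robin();
              vdir.add_backend(directors.lookup("backend1"));
          }
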
ACKNOWLEDGEMENTS

       Development of a previous version of the shard director was partly
       sponsored by Deutsche Telekom AG - Products & Innovation.

       Development of a previous version of the shard director was partly
       sponsored by BILD GmbH & Co KG.

          This document is licensed under the same licence as Varnish
          itself. See LICENCE for details.

          Copyright (c) 2013-2015 Varnish Software AS
          Copyright 2009-2018 UPLEX - Nils Goroll Systemoptimierung
          All rights reserved.

          Authors: Poul-Henning Kamp <phk@FreeBSD.org>
                   Julian Wiesener <jw@uplex.de>
                   Nils Goroll <slink@uplex.de>
                   Geoffrey Simmons <geoff@uplex.de>

          SPDX-License-Identifier: BSD-2-Clause

          Redistribution and use in source and binary forms, with or without
          modification, are permitted provided that the following conditions
          are met:
          1. Redistributions of source code must retain the above copyright
             notice, this list of conditions and the following disclaimer.
          2. Redistributions in binary form must reproduce the above copyright
             notice, this list of conditions and the following disclaimer in the
             documentation and/or other materials provided with the distribution.

          THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
          ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
          IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
          ARE DISCLAIMED.  IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
          FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
          DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
          OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
          HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
          LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
          OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
          SUCH DAMAGE.



                                                             VMOD_DIRECTORS(3)