TEAMD.CONF(5)              Team daemon configuration             TEAMD.CONF(5)

NAME

       teamd.conf — libteam daemon configuration file

DESCRIPTION

       teamd uses a configuration file in JSON format.

OPTIONS

       device (string)
              Desired name of the new team device.

       debug_level (int)
              Level of debug messages. The higher it is, the more debug
              messages will be printed. It is the same as passing the "-g"
              command line option one or more times.

              Default: 0 (disabled)

       hwaddr (string)
              Desired hardware address of the new team device. The usual MAC
              address format is accepted.

       runner.name (string)
              Name of the runner to be used by the team device. The
              following runners are available:

              broadcast — Simple runner which directs the team device to
              transmit packets via all ports.

              roundrobin — Simple runner which directs the team device to
              transmit packets in a round-robin fashion.

              random — Simple runner which directs the team device to
              transmit packets on a randomly selected port.

              activebackup — Watches for link changes and selects an active
              port to be used for data transfers.

              loadbalance — To do passive load balancing, the runner only
              sets up a BPF hash function which determines the port used to
              transmit each packet. To do active load balancing, the runner
              moves hashes among available ports, trying to reach a perfect
              balance.

              lacp — Implements the 802.3ad LACP protocol. Can use the same
              Tx port selection possibilities as the loadbalance runner.

              Default: roundrobin
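
       For instance, a minimal configuration selecting the broadcast runner
       might look like the following sketch (the device and port names are
       illustrative):

              {
                "device": "team0",
                "runner": {"name": "broadcast"},
                "ports": {"eth1": {}, "eth2": {}}
              }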

       notify_peers.count (int)
              Number of bursts of unsolicited NAs and gratuitous ARP packets
              sent after a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       notify_peers.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of notify-peer packets.

              Default: 0

       mcast_rejoin.count (int)
              Number of bursts of multicast group rejoin requests sent after
              a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       mcast_rejoin.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of multicast group rejoin requests.

              Default: 0
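
       Both option groups above are given as top-level objects of the
       configuration. A sketch with illustrative values:

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "notify_peers": {"count": 3, "interval": 200},
                "mcast_rejoin": {"count": 3, "interval": 200},
                "ports": {"eth1": {}, "eth2": {}}
              }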

       link_watch.name | ports.PORTIFNAME.link_watch.name (string)
              Name of the link watcher to be used. The following link
              watchers are available:

              ethtool — Uses the libteam library to get port ethtool state
              changes.

              arp_ping — ARP requests are sent through a port. If an ARP
              reply is received, the link is considered to be up.

              nsna_ping — Similar to the previous one, except that it uses
              the IPv6 Neighbor Solicitation / Neighbor Advertisement
              mechanism. This is an alternative to arp_ping and comes in
              handy in pure-IPv6 environments.

       ports (object)
              List of ports (network devices) to be used in a team device.

              See examples for more information.

       ports.PORTIFNAME.queue_id (int)
              ID of the queue which this port should be mapped to.

              Default: None
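
       As a sketch, per-port link watchers and queue mappings nest under the
       respective port entries (interface names, queue IDs and the target
       address are illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "ports": {
                  "eth1": {
                    "queue_id": 1,
                    "link_watch": {"name": "ethtool"}
                  },
                  "eth2": {
                    "link_watch": {
                      "name": "arp_ping",
                      "target_host": "192.168.23.1"
                    }
                  }
                }
              }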

ACTIVE-BACKUP RUNNER SPECIFIC OPTIONS

       runner.hwaddr_policy (string)
              This defines the policy for how hardware addresses of the team
              device and port devices should be set during the team
              lifetime. The following are available:

              same_all — All ports will always have the same hardware
              address as the associated team device.

              by_active — Team device adopts the hardware address of the
              currently active port. This is useful when the port device is
              not able to change its hardware address.

              only_active — Only the active port adopts the hardware address
              of the team device. The others have their own.

              Default: same_all
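
       A sketch of an active-backup setup using the by_active policy (device
       and port names are illustrative):

              {
                "device": "team0",
                "runner": {
                  "name": "activebackup",
                  "hwaddr_policy": "by_active"
                },
                "link_watch": {"name": "ethtool"},
                "ports": {"eth1": {}, "eth2": {}}
              }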

       ports.PORTIFNAME.prio (int)
              Port priority. A higher number means higher priority.

              Default: 0

       ports.PORTIFNAME.sticky (bool)
              Flag which indicates if the port is sticky. If set, the port
              does not get unselected when another port with higher priority
              or better parameters becomes available.

              Default: false

LOAD BALANCE RUNNER SPECIFIC OPTIONS

       runner.tx_hash (array)
              List of fragment types (strings) which should be used for
              packet Tx hash computation. The following are available:

              eth — Uses source and destination MAC addresses.

              vlan — Uses VLAN id.

              ipv4 — Uses source and destination IPv4 addresses.

              ipv6 — Uses source and destination IPv6 addresses.

              ip — Uses source and destination IPv4 and IPv6 addresses.

              l3 — Uses source and destination IPv4 and IPv6 addresses.

              tcp — Uses source and destination TCP ports.

              udp — Uses source and destination UDP ports.

              sctp — Uses source and destination SCTP ports.

              l4 — Uses source and destination TCP, UDP and SCTP ports.

              Default: ["eth", "ipv4", "ipv6"]

       runner.tx_balancer.name (string)
              Name of the active Tx balancer. Active Tx balancing is
              disabled by default. The only available value is "basic".

              Default: None

       runner.tx_balancer.balancing_interval (int)
              In tenths of a second. Periodic interval between rebalancing.

              Default: 50
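
       For example, active Tx balancing that rebalances every 10 seconds
       (100 tenths of a second) could be sketched as follows; the hash list
       and interval are illustrative:

              {
                "device": "team0",
                "runner": {
                  "name": "loadbalance",
                  "tx_hash": ["eth", "l3", "l4"],
                  "tx_balancer": {
                    "name": "basic",
                    "balancing_interval": 100
                  }
                },
                "ports": {"eth1": {}, "eth2": {}}
              }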

LACP RUNNER SPECIFIC OPTIONS

       runner.active (bool)
              If active is true, LACPDU frames are sent along the configured
              links periodically. If not, it acts as "speak when spoken to".

              Default: true

       runner.fast_rate (bool)
              Option specifies the rate at which our link partner is asked
              to transmit LACPDU packets. If this is true, packets will be
              sent once per second. Otherwise they will be sent every 30
              seconds.

              Default: false

       runner.tx_hash (array)
              Same as for the load balance runner.

       runner.tx_balancer.name (string)
              Same as for the load balance runner.

       runner.tx_balancer.balancing_interval (int)
              Same as for the load balance runner.

       runner.sys_prio (int)
              System priority, value can be 0 – 65535.

              Default: 65535

       runner.min_ports (int)
              Specifies the minimum number of ports that must be active
              before asserting carrier in the master interface, value can be
              1 – 255.

              Default: 1

       runner.agg_select_policy (string)
              This selects the policy for how the aggregator will be
              selected. The following are available:

              lacp_prio — Aggregator with highest priority according to the
              LACP standard will be selected. Aggregator priority is
              affected by the per-port option lacp_prio.

              lacp_prio_stable — Same as the previous one, except the
              selected aggregator is not replaced if it is still usable.

              bandwidth — Select the aggregator with the highest total
              bandwidth.

              count — Select the aggregator with the highest number of
              ports.

              port_config — Aggregator with highest priority according to
              the per-port options prio and sticky will be selected. This
              means that the aggregator containing the port with the highest
              priority will be selected unless at least one of the ports in
              the currently selected aggregator is sticky.

              Default: lacp_prio

       ports.PORTIFNAME.lacp_prio (int)
              Port priority according to the LACP standard. A lower number
              means higher priority.

              Default: 255

       ports.PORTIFNAME.lacp_key (int)
              Port key according to the LACP standard. It is only possible
              to aggregate ports with the same key.

              Default: 0
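
       Putting the LACP options together, a configuration might be sketched
       as follows (priorities, keys and port names are illustrative):

              {
                "device": "team0",
                "runner": {
                  "name": "lacp",
                  "active": true,
                  "fast_rate": true,
                  "sys_prio": 1024,
                  "min_ports": 2,
                  "agg_select_policy": "bandwidth",
                  "tx_hash": ["eth", "ipv4", "ipv6"]
                },
                "link_watch": {"name": "ethtool"},
                "ports": {
                  "eth1": {"lacp_prio": 10, "lacp_key": 1},
                  "eth2": {"lacp_prio": 10, "lacp_key": 1}
                }
              }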

ETHTOOL LINK WATCH SPECIFIC OPTIONS

       link_watch.delay_up | ports.PORTIFNAME.link_watch.delay_up (int)
              Value is a positive number in milliseconds. It is the delay
              between the link coming up and the runner being notified about
              it.

              Default: 0

       link_watch.delay_down | ports.PORTIFNAME.link_watch.delay_down (int)
              Value is a positive number in milliseconds. It is the delay
              between the link going down and the runner being notified
              about it.

              Default: 0

ARP PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between ARP requests being sent.

              Default: 1000

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first ARP request
              being sent.

              Default: 0

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed ARP replies. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.source_host | ports.PORTIFNAME.link_watch.source_host
       (hostname)
              Hostname to be converted to an IP address which will be filled
              into the ARP request as the source address.

              Default: 0.0.0.0

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IP address which will be filled
              into the ARP request as the destination address.

       link_watch.validate_active | ports.PORTIFNAME.link_watch.validate_active
       (bool)
              Validate received ARP packets on active ports. If this is not
              set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.validate_inactive | ports.PORTIFNAME.link_watch.validate_inactive
       (bool)
              Validate received ARP packets on inactive ports. If this is
              not set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.vlanid | ports.PORTIFNAME.link_watch.vlanid (int)
              By default, ARP requests are sent without VLAN tags. This
              option causes outgoing ARP requests to be sent with the
              specified VLAN ID number.

              Default: None

       link_watch.send_always | ports.PORTIFNAME.link_watch.send_always (bool)
              By default, ARP requests are sent on active ports only. This
              option allows sending even on inactive ports.

              Default: false
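
       A sketch combining several of the ARP ping options above (the
       addresses and VLAN ID are illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "link_watch": {
                  "name": "arp_ping",
                  "interval": 100,
                  "missed_max": 30,
                  "source_host": "192.168.23.2",
                  "target_host": "192.168.23.1",
                  "vlanid": 10,
                  "validate_active": true,
                  "send_always": true
                },
                "ports": {"eth1": {}, "eth2": {}}
              }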

NS/NA PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between sending NS packets.

              Default: 1000

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first NS packet
              being sent.

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed NA reply packets. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IPv6 address which will be
              filled into the NS packet as the target address.
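
       For a pure-IPv6 environment, an nsna_ping watcher could be sketched
       as follows (the interval, missed_max and target address are
       illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "link_watch": {
                  "name": "nsna_ping",
                  "interval": 200,
                  "missed_max": 15,
                  "target_host": "fe80::1"
                },
                "ports": {"eth1": {}, "eth2": {}}
              }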

EXAMPLES

       {
         "device": "team0",
         "runner": {"name": "roundrobin"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Very basic configuration.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {"name": "ethtool"},
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses active-backup runner with ethtool link
       watcher. Port eth2 has higher priority, but the sticky flag ensures
       that if eth1 becomes active, it stays active while the link remains
       up.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "ethtool",
           "delay_up": 2500,
           "delay_down": 1000
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one. The only difference is that link changes
       are not propagated to the runner immediately; instead, the configured
       delays are applied.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "arp_ping",
           "interval": 100,
           "missed_max": 30,
           "target_host": "192.168.23.1"
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses the ARP ping link watcher.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": [
           {
             "name": "arp_ping",
             "interval": 100,
             "missed_max": 30,
             "target_host": "192.168.23.1"
           },
           {
             "name": "arp_ping",
             "interval": 50,
             "missed_max": 20,
             "target_host": "192.168.24.1"
           }
         ],
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one, only this time two link watchers are
       used at the same time.

       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for hash-based passive Tx load balancing.

       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"],
           "tx_balancer": {
             "name": "basic"
           }
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for active Tx load balancing using basic load balancer.

       {
         "device": "team0",
         "runner": {
           "name": "lacp",
           "active": true,
           "fast_rate": true,
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "link_watch": {"name": "ethtool"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for connection to an LACP-capable counterpart.

SEE ALSO

       teamd(8), teamdctl(8), teamnl(8), bond2team(1)

AUTHOR

       Jiri Pirko is the original author and current maintainer of libteam.

libteam                           2013-07-09                     TEAMD.CONF(5)