TEAMD.CONF(5)              Team daemon configuration             TEAMD.CONF(5)

NAME

       teamd.conf — libteam daemon configuration file

DESCRIPTION

       teamd uses a configuration file in JSON format.

OPTIONS

       device (string)
              Desired name of the new team device.

       debug_level (int)
              Level of debug messages. The higher the level, the more debug
              messages are printed. It is the same as adding the "-g" command
              line option.

              Default: 0 (disabled)

       hwaddr (string)
              Desired hardware address of the new team device. The usual MAC
              address format is accepted.
       runner.name (string)
              Name of the team device runner to be used. The following
              runners are available:

              broadcast — Simple runner which directs the team device to
              transmit packets via all ports.

              roundrobin — Simple runner which directs the team device to
              transmit packets in a round-robin fashion.

              random — Simple runner which directs the team device to
              transmit packets on a randomly selected port.

              activebackup — Watches for link changes and selects an active
              port to be used for data transfers.

              loadbalance — For passive load balancing, the runner only sets
              up a BPF hash function which determines the port used to
              transmit each packet. For active load balancing, the runner
              moves hashes among available ports, trying to reach a perfect
              balance.

              lacp — Implements the 802.3ad LACP protocol. Can use the same
              Tx port selection possibilities as the loadbalance runner.
       notify_peers.count (int)
              Number of bursts of unsolicited NAs and gratuitous ARP packets
              sent after a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       notify_peers.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of notify-peer packets.

              Default: 0

       mcast_rejoin.count (int)
              Number of bursts of multicast group rejoin requests sent after
              a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       mcast_rejoin.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of multicast group rejoin requests.

              Default: 0
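       For example (an illustrative fragment; interface names and values
       are arbitrary), an active-backup configuration sending four bursts
       of peer notifications and multicast rejoins, 75 milliseconds apart:

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "notify_peers": {"count": 4, "interval": 75},
         "mcast_rejoin": {"count": 4, "interval": 75},
         "ports": {"eth1": {}, "eth2": {}}
       }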
       link_watch.name | ports.PORTIFNAME.link_watch.name (string)
              Name of the link watcher to be used. The following link
              watchers are available:

              ethtool — Uses the libteam library to get port ethtool state
              changes.

              arp_ping — ARP requests are sent through a port. If an ARP
              reply is received, the link is considered to be up.

              nsna_ping — Similar to the previous, except that it uses the
              IPv6 Neighbor Solicitation / Neighbor Advertisement mechanism.
              This is an alternative to arp_ping and is handy in pure-IPv6
              environments.

       ports (object)
              List of ports (network devices) to be used in the team device.

              See the examples for more information.

       ports.PORTIFNAME.queue_id (int)
              ID of the queue to which this port should be mapped.

              Default: None
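       For example (interface names and the queue ID are illustrative), a
       fragment mapping port eth2 to queue 1:

       {
         "device": "team0",
         "runner": {"name": "roundrobin"},
         "ports": {
           "eth1": {},
           "eth2": {"queue_id": 1}
         }
       }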

ACTIVE-BACKUP RUNNER SPECIFIC OPTIONS

       runner.hwaddr_policy (string)
              This defines the policy for how hardware addresses of the team
              device and port devices should be set during the team's
              lifetime. The following are available:

              same_all — All ports will always have the same hardware
              address as the associated team device.

              by_active — The team device adopts the hardware address of the
              currently active port. This is useful when the port device is
              not able to change its hardware address.

              only_active — Only the active port adopts the hardware address
              of the team device. The others keep their own.

              Default: same_all

       ports.PORTIFNAME.prio (int)
              Port priority. A higher number means a higher priority.

              Default: 0

       ports.PORTIFNAME.sticky (bool)
              Flag which indicates whether the port is sticky. If set, the
              port does not get unselected if another port with a higher
              priority or better parameters becomes available.

              Default: false
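       For example (interface names are illustrative), a fragment using the
       by_active policy:

       {
         "device": "team0",
         "runner": {
           "name": "activebackup",
           "hwaddr_policy": "by_active"
         },
         "ports": {"eth1": {}, "eth2": {}}
       }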

LOAD BALANCE RUNNER SPECIFIC OPTIONS

       runner.tx_hash (array)
              List of fragment types (strings) which should be used for
              packet Tx hash computation. The following are available:

              eth — Uses source and destination MAC addresses.

              vlan — Uses the VLAN id.

              ipv4 — Uses source and destination IPv4 addresses.

              ipv6 — Uses source and destination IPv6 addresses.

              ip — Uses source and destination IPv4 and IPv6 addresses.

              l3 — Uses source and destination IPv4 and IPv6 addresses.

              tcp — Uses source and destination TCP ports.

              udp — Uses source and destination UDP ports.

              sctp — Uses source and destination SCTP ports.

              l4 — Uses source and destination TCP, UDP and SCTP ports.

       runner.tx_balancer.name (string)
              Name of the active Tx balancer. Active Tx balancing is
              disabled by default. The only value available is basic.

              Default: None

       runner.tx_balancer.balancing_interval (int)
              Value is a positive number in tenths of a second. Periodic
              interval between rebalancing runs.

              Default: 50

LACP RUNNER SPECIFIC OPTIONS

       runner.active (bool)
              If active is true, LACPDU frames are sent along the configured
              links periodically. If not, it acts as "speak when spoken to".

              Default: true

       runner.fast_rate (bool)
              Option specifies the rate at which our link partner is asked
              to transmit LACPDU packets. If this is true, packets will be
              sent once per second. Otherwise they will be sent every 30
              seconds.

       runner.tx_hash (array)
              Same as for the load balance runner.

       runner.tx_balancer.name (string)
              Same as for the load balance runner.

       runner.tx_balancer.balancing_interval (int)
              Same as for the load balance runner.

       runner.sys_prio (int)
              System priority. The value can be 0 – 65535.

              Default: 65535

       runner.min_ports (int)
              Specifies the minimum number of ports that must be active
              before asserting carrier in the master interface. The value
              can be 1 – 255.

              Default: 1

       runner.agg_select_policy (string)
              This selects the policy for how the aggregator will be
              selected. The following are available:

              lacp_prio — The aggregator with the highest priority according
              to the LACP standard will be selected. Aggregator priority is
              affected by the per-port option lacp_prio.

              lacp_prio_stable — Same as the previous one, except the
              selected aggregator is not replaced if it is still usable.

              bandwidth — Select the aggregator with the highest total
              bandwidth.

              count — Select the aggregator with the highest number of
              ports.

              port_config — The aggregator with the highest priority
              according to the per-port options prio and sticky will be
              selected. This means that the aggregator containing the port
              with the highest priority will be selected, unless at least
              one of the ports in the currently selected aggregator is
              sticky.

              Default: lacp_prio

       ports.PORTIFNAME.lacp_prio (int)
              Port priority according to the LACP standard. A lower number
              means a higher priority.

              Default: 255

       ports.PORTIFNAME.lacp_key (int)
              Port key according to the LACP standard. It is only possible
              to aggregate ports with the same key.

              Default: 0
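       For example (all values are illustrative), a fragment combining the
       LACP runner options above with per-port priorities and keys:

       {
         "device": "team0",
         "runner": {
           "name": "lacp",
           "active": true,
           "sys_prio": 1024,
           "min_ports": 2,
           "tx_hash": ["eth", "ipv4"]
         },
         "ports": {
           "eth1": {"lacp_prio": 10, "lacp_key": 1},
           "eth2": {"lacp_prio": 20, "lacp_key": 1}
         }
       }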
ETHTOOL LINK WATCH SPECIFIC OPTIONS

       link_watch.delay_up | ports.PORTIFNAME.link_watch.delay_up (int)
              Value is a positive number in milliseconds. It is the delay
              between the link coming up and the runner being notified about
              it.

              Default: 0

       link_watch.delay_down | ports.PORTIFNAME.link_watch.delay_down (int)
              Value is a positive number in milliseconds. It is the delay
              between the link going down and the runner being notified
              about it.

              Default: 0
ARP PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between ARP requests being sent.

              Default: 1000

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first ARP request
              being sent.

              Default: 0

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed ARP replies. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.source_host |
       ports.PORTIFNAME.link_watch.source_host (hostname)
              Hostname to be converted to an IP address, which will be
              filled into the ARP request as the source address.

              Default: 0.0.0.0

       link_watch.target_host |
       ports.PORTIFNAME.link_watch.target_host (hostname)
              Hostname to be converted to an IP address, which will be
              filled into the ARP request as the destination address.

       link_watch.validate_active |
       ports.PORTIFNAME.link_watch.validate_active (bool)
              Validate received ARP packets on active ports. If this is not
              set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.validate_inactive |
       ports.PORTIFNAME.link_watch.validate_inactive (bool)
              Validate received ARP packets on inactive ports. If this is
              not set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.vlanid | ports.PORTIFNAME.link_watch.vlanid (int)
              By default, ARP requests are sent without VLAN tags. This
              option causes outgoing ARP requests to be sent with the
              specified VLAN ID number.

              Default: None

       link_watch.send_always | ports.PORTIFNAME.link_watch.send_always (bool)
              By default, ARP requests are sent on active ports only. This
              option allows sending even on inactive ports.

              Default: false
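       For example (addresses and the VLAN ID are illustrative), an
       arp_ping fragment that validates replies on the active port and tags
       outgoing requests with VLAN ID 10:

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "arp_ping",
           "interval": 200,
           "missed_max": 15,
           "source_host": "192.168.23.2",
           "target_host": "192.168.23.1",
           "validate_active": true,
           "vlanid": 10
         },
         "ports": {"eth1": {}, "eth2": {}}
       }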
NSNA PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between sending NS packets.

              Default: 1000

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first NS packet
              being sent.

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed NA reply packets. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.target_host |
       ports.PORTIFNAME.link_watch.target_host (hostname)
              Hostname to be converted to an IPv6 address, which will be
              filled into the NS packet as the target address.
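       For example (the target address is illustrative), an nsna_ping
       fragment for a pure-IPv6 environment:

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "nsna_ping",
           "interval": 200,
           "missed_max": 15,
           "target_host": "fe80::1"
         },
         "ports": {"eth1": {}, "eth2": {}}
       }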

EXAMPLES

       {
         "device": "team0",
         "runner": {"name": "roundrobin"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Very basic configuration.
       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {"name": "ethtool"},
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses the active-backup runner with the ethtool
       link watcher. Port eth2 has the higher priority, but the sticky flag
       ensures that if eth1 becomes active, it stays active while its link
       remains up.
       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "ethtool",
           "delay_up": 2500,
           "delay_down": 1000
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one. The only difference is that link changes
       are not propagated to the runner immediately; the configured delays
       are applied first.
       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "arp_ping",
           "interval": 100,
           "missed_max": 30,
           "target_host": "192.168.23.1"
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses the arp_ping link watcher.
       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": [
           {
             "name": "arp_ping",
             "interval": 100,
             "missed_max": 30,
             "target_host": "192.168.23.1"
           },
           {
             "name": "arp_ping",
             "interval": 50,
             "missed_max": 20,
             "target_host": "192.168.24.1"
           }
         ],
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one, only this time two link watchers are
       used at the same time.
       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for hash-based passive Tx load balancing.
       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"],
           "tx_balancer": {
             "name": "basic"
           }
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for active Tx load balancing using the basic load
       balancer.
       {
         "device": "team0",
         "runner": {
           "name": "lacp",
           "active": true,
           "fast_rate": true,
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "link_watch": {"name": "ethtool"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for connecting to an LACP-capable counterpart.

SEE ALSO

       teamd(8), teamdctl(8), teamnl(8), bond2team(1)

AUTHOR

       Jiri Pirko is the original author and current maintainer of libteam.



libteam                           2013-07-09                     TEAMD.CONF(5)