TEAMD.CONF(5)              Team daemon configuration             TEAMD.CONF(5)

NAME

       teamd.conf — libteam daemon configuration file

DESCRIPTION

       teamd uses a configuration in JSON format. The configuration is a
       single JSON object whose members are described in the OPTIONS section
       below.

OPTIONS

       device (string)
              Desired name of the new team device.

       debug_level (int)
              Level of debug messages. The higher it is, the more debug
              messages will be printed. It is the same as adding the "-g"
              command line option.

              Default: 0 (disabled)

       hwaddr (string)
              Desired hardware address of the new team device. The usual MAC
              address format is accepted.

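       For example, these global options sit at the top level of the
       configuration object; the device name, hardware address and debug
       level below are purely illustrative:

              {
                "device": "team0",
                "hwaddr": "02:11:22:33:44:55",
                "debug_level": 2,
                "runner": {"name": "roundrobin"}
              }
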
       runner.name (string)
              Name of the runner to be used by the team device. The
              following runners are available:

              broadcast — Simple runner which directs the team device to
              transmit packets via all ports.

              roundrobin — Simple runner which directs the team device to
              transmit packets in a round-robin fashion.

              random — Simple runner which directs the team device to
              transmit packets on a randomly selected port.

              activebackup — Watches for link changes and selects an active
              port to be used for data transfers.

              loadbalance — To do passive load balancing, the runner only
              sets up a BPF hash function which determines the port used for
              packet transmit. To do active load balancing, the runner moves
              hashes among available ports, trying to reach a perfect
              balance.

              lacp — Implements the 802.3ad LACP protocol. Can use the same
              Tx port selection possibilities as the loadbalance runner.

       notify_peers.count (int)
              Number of bursts of unsolicited NAs and gratuitous ARP packets
              sent after a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       notify_peers.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of notify-peer packets.

              Default: 0

       mcast_rejoin.count (int)
              Number of bursts of multicast group rejoin requests sent after
              a port is enabled or disabled.

              Default: 0 (disabled)

              Default for activebackup runner: 1

       mcast_rejoin.interval (int)
              Value is a positive number in milliseconds. Specifies the
              interval between bursts of multicast group rejoin requests.

              Default: 0

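       For example, an active-backup configuration might send three bursts
       of peer notifications and multicast rejoins, 20 ms apart (the values
       below are illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "notify_peers": {"count": 3, "interval": 20},
                "mcast_rejoin": {"count": 3, "interval": 20},
                "ports": {"eth1": {}, "eth2": {}}
              }
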
       link_watch.name | ports.PORTIFNAME.link_watch.name (string)
              Name of the link watcher to be used. The following link
              watchers are available:

              ethtool — Uses the libteam library to get port ethtool state
              changes.

              arp_ping — ARP requests are sent through a port. If an ARP
              reply is received, the link is considered to be up.

              nsna_ping — Similar to the previous one, except that it uses
              the IPv6 Neighbor Solicitation / Neighbor Advertisement
              mechanism. This is an alternative to arp_ping and comes in
              handy in pure-IPv6 environments.

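       A link watcher can be configured globally or per port; for example
       (interface names and the target address are illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "link_watch": {"name": "ethtool"},
                "ports": {
                  "eth1": {},
                  "eth2": {
                    "link_watch": {
                      "name": "arp_ping",
                      "interval": 100,
                      "target_host": "192.168.23.1"
                    }
                  }
                }
              }
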
       ports (object)
              List of ports (network devices) to be used in the team device.

              See examples for more information.

       ports.PORTIFNAME.queue_id (int)
              ID of the queue which this port should be mapped to.

              Default: None

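       For example, a port can be mapped to a specific queue of the team
       device (the queue ID below is illustrative and assumes the team
       device was created with multiple queues):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "ports": {
                  "eth1": {"queue_id": 1},
                  "eth2": {}
                }
              }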

ACTIVE-BACKUP RUNNER SPECIFIC OPTIONS

       runner.hwaddr_policy (string)
              This defines the policy for how hardware addresses of the team
              device and port devices should be set during the team
              lifetime. The following are available:

              same_all — All ports will always have the same hardware
              address as the associated team device.

              by_active — The team device adopts the hardware address of the
              currently active port. This is useful when the port device is
              not able to change its hardware address.

              only_active — Only the active port adopts the hardware address
              of the team device. The others have their own.

              Default: same_all

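       For example, a minimal sketch of an active-backup configuration using
       the by_active policy (interface names are illustrative):

              {
                "device": "team0",
                "runner": {
                  "name": "activebackup",
                  "hwaddr_policy": "by_active"
                },
                "link_watch": {"name": "ethtool"},
                "ports": {"eth1": {}, "eth2": {}}
              }
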
       ports.PORTIFNAME.prio (int)
              Port priority. A higher number means higher priority.

              Default: 0

       ports.PORTIFNAME.sticky (bool)
              Flag which indicates whether the port is sticky. If set, the
              port does not get unselected when another port with higher
              priority or better parameters becomes available.

              Default: false

LOAD BALANCE RUNNER SPECIFIC OPTIONS

       runner.tx_hash (array)
              List of fragment types (strings) which should be used for
              packet Tx hash computation. The following are available:

              eth — Uses source and destination MAC addresses.

              vlan — Uses VLAN id.

              ipv4 — Uses source and destination IPv4 addresses.

              ipv6 — Uses source and destination IPv6 addresses.

              ip — Uses source and destination IPv4 and IPv6 addresses.

              l3 — Uses source and destination IPv4 and IPv6 addresses.

              tcp — Uses source and destination TCP ports.

              udp — Uses source and destination UDP ports.

              sctp — Uses source and destination SCTP ports.

              l4 — Uses source and destination TCP, UDP and SCTP ports.

       runner.tx_balancer.name (string)
              Name of the active Tx balancer. Active Tx balancing is
              disabled by default. The only value available is basic.

              Default: None

       runner.tx_balancer.balancing_interval (int)
              Value is a positive number in tenths of a second. It is the
              periodic interval between rebalancing.

              Default: 50

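       For example, active Tx load balancing with rebalancing every 10
       seconds (the interval of 100 tenths of a second is illustrative):

              {
                "device": "team0",
                "runner": {
                  "name": "loadbalance",
                  "tx_hash": ["eth", "ipv4", "ipv6"],
                  "tx_balancer": {
                    "name": "basic",
                    "balancing_interval": 100
                  }
                },
                "ports": {"eth1": {}, "eth2": {}}
              }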

LACP RUNNER SPECIFIC OPTIONS

       runner.active (bool)
              If active is true, LACPDU frames are sent along the configured
              links periodically. If not, it acts as "speak when spoken to".

              Default: true

       runner.fast_rate (bool)
              Option specifies the rate at which our link partner is asked
              to transmit LACPDU packets. If this is true, packets will be
              sent once per second. Otherwise they will be sent every 30
              seconds.

       runner.tx_hash (array)
              Same as for the load balance runner.

       runner.tx_balancer.name (string)
              Same as for the load balance runner.

       runner.tx_balancer.balancing_interval (int)
              Same as for the load balance runner.

       runner.sys_prio (int)
              System priority. The value can be 0 – 65535.

              Default: 65535

       runner.min_ports (int)
              Specifies the minimum number of ports that must be active
              before asserting carrier in the master interface. The value
              can be 1 – 255.

              Default: 0

       runner.agg_select_policy (string)
              This selects the policy for how the aggregators will be
              selected. The following are available:

              lacp_prio — The aggregator with the highest priority according
              to the LACP standard will be selected. Aggregator priority is
              affected by the per-port option lacp_prio.

              lacp_prio_stable — Same as the previous one, except that the
              selected aggregator is not replaced if it is still usable.

              bandwidth — Select the aggregator with the highest total
              bandwidth.

              count — Select the aggregator with the highest number of
              ports.

              port_config — The aggregator with the highest priority
              according to the per-port options prio and sticky will be
              selected. This means that the aggregator containing the port
              with the highest priority will be selected unless at least one
              of the ports in the currently selected aggregator is sticky.

              Default: lacp_prio

       ports.PORTIFNAME.lacp_prio (int)
              Port priority according to the LACP standard. A lower number
              means higher priority.

              Default: 255

       ports.PORTIFNAME.lacp_key (int)
              Port key according to the LACP standard. It is only possible
              to aggregate ports with the same key.

              Default: 0

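       For example, an LACP configuration can combine these runner options
       with per-port LACP settings (all values below are illustrative):

              {
                "device": "team0",
                "runner": {
                  "name": "lacp",
                  "active": true,
                  "fast_rate": true,
                  "sys_prio": 1024,
                  "min_ports": 2,
                  "agg_select_policy": "lacp_prio_stable",
                  "tx_hash": ["eth", "ipv4", "ipv6"]
                },
                "link_watch": {"name": "ethtool"},
                "ports": {
                  "eth1": {"lacp_prio": 10, "lacp_key": 1},
                  "eth2": {"lacp_prio": 20, "lacp_key": 1}
                }
              }
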
ETHTOOL LINK WATCH SPECIFIC OPTIONS

       link_watch.delay_up | ports.PORTIFNAME.link_watch.delay_up (int)
              Value is a positive number in milliseconds. It is the delay
              between the link coming up and the runner being notified about
              it.

              Default: 0

       link_watch.delay_down | ports.PORTIFNAME.link_watch.delay_down (int)
              Value is a positive number in milliseconds. It is the delay
              between the link going down and the runner being notified
              about it.

              Default: 0

ARP PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between ARP requests being sent.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first ARP request
              being sent.

              Default: 0

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed ARP replies. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.source_host | ports.PORTIFNAME.link_watch.source_host
       (hostname)
              Hostname to be converted to an IP address which will be filled
              into the ARP request as the source address.

              Default: 0.0.0.0

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IP address which will be filled
              into the ARP request as the destination address.

       link_watch.validate_active |
       ports.PORTIFNAME.link_watch.validate_active (bool)
              Validate received ARP packets on active ports. If this is not
              set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.validate_inactive |
       ports.PORTIFNAME.link_watch.validate_inactive (bool)
              Validate received ARP packets on inactive ports. If this is
              not set, all incoming ARP packets will be considered as a good
              reply.

              Default: false

       link_watch.vlanid | ports.PORTIFNAME.link_watch.vlanid (int)
              By default, ARP requests are sent without VLAN tags. This
              option causes outgoing ARP requests to be sent with the
              specified VLAN ID number.

              Default: None

       link_watch.send_always | ports.PORTIFNAME.link_watch.send_always (bool)
              By default, ARP requests are sent on active ports only. This
              option allows sending even on inactive ports.

              Default: false

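       For example, an arp_ping watcher can validate replies on both active
       and inactive ports and send VLAN-tagged requests from a specific
       source address (the addresses and VLAN ID are illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "link_watch": {
                  "name": "arp_ping",
                  "interval": 100,
                  "missed_max": 30,
                  "source_host": "192.168.23.2",
                  "target_host": "192.168.23.1",
                  "validate_active": true,
                  "validate_inactive": true,
                  "vlanid": 100
                },
                "ports": {"eth1": {}, "eth2": {}}
              }
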
NSNA_PING LINK WATCH SPECIFIC OPTIONS

       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval
              between sending NS packets.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay
              between link watch initialization and the first NS packet
              being sent.

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed NA reply packets. If this number is
              exceeded, the link is reported as down.

              Default: 3

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host
       (hostname)
              Hostname to be converted to an IPv6 address which will be
              filled into the NS packet as the target address.

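       For example (the IPv6 target address and the interval are
       illustrative):

              {
                "device": "team0",
                "runner": {"name": "activebackup"},
                "link_watch": {
                  "name": "nsna_ping",
                  "interval": 200,
                  "missed_max": 15,
                  "target_host": "fe80::1"
                },
                "ports": {"eth1": {}, "eth2": {}}
              }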

EXAMPLES

       {
         "device": "team0",
         "runner": {"name": "roundrobin"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Very basic configuration.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {"name": "ethtool"},
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses the active-backup runner with the ethtool
       link watcher. Port eth2 has higher priority, but the sticky flag
       ensures that if eth1 becomes active, it stays active while the link
       remains up.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "ethtool",
           "delay_up": 2500,
           "delay_down": 1000
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one. The only difference is that link changes
       are not propagated to the runner immediately; instead, the configured
       delays are applied.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": {
           "name": "arp_ping",
           "interval": 100,
           "missed_max": 30,
           "target_host": "192.168.23.1"
         },
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       This configuration uses the arp_ping link watcher.

       {
         "device": "team0",
         "runner": {"name": "activebackup"},
         "link_watch": [
           {
             "name": "arp_ping",
             "interval": 100,
             "missed_max": 30,
             "target_host": "192.168.23.1"
           },
           {
             "name": "arp_ping",
             "interval": 50,
             "missed_max": 20,
             "target_host": "192.168.24.1"
           }
         ],
         "ports": {
           "eth1": {
             "prio": -10,
             "sticky": true
           },
           "eth2": {
             "prio": 100
           }
         }
       }

       Similar to the previous one, only this time two link watchers are
       used at the same time.

       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for hash-based passive Tx load balancing.

       {
         "device": "team0",
         "runner": {
           "name": "loadbalance",
           "tx_hash": ["eth", "ipv4", "ipv6"],
           "tx_balancer": {
             "name": "basic"
           }
         },
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for active Tx load balancing using the basic load
       balancer.

       {
         "device": "team0",
         "runner": {
           "name": "lacp",
           "active": true,
           "fast_rate": true,
           "tx_hash": ["eth", "ipv4", "ipv6"]
         },
         "link_watch": {"name": "ethtool"},
         "ports": {"eth1": {}, "eth2": {}}
       }

       Configuration for connection to an LACP-capable counterpart.

SEE ALSO

       teamd(8), teamdctl(8), teamnl(8), bond2team(1)

AUTHOR

       Jiri Pirko is the original author and current maintainer of libteam.

libteam                           2013-07-09                     TEAMD.CONF(5)